00:00:00.001 Started by upstream project "autotest-nightly" build number 3333
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 2727
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.106 The recommended git tool is: git
00:00:00.106 using credential 00000000-0000-0000-0000-000000000002
00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu20-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.135 Fetching changes from the remote Git repository
00:00:00.136 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.161 Using shallow fetch with depth 1
00:00:00.161 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.161 > git --version # timeout=10
00:00:00.177 > git --version # 'git version 2.39.2'
00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.178 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.136 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.148 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.161 Checking out Revision 98d6b8327afc23a73b335b56c2817216b73f106d (FETCH_HEAD)
00:00:06.161 > git config core.sparsecheckout # timeout=10
00:00:06.173 > git read-tree -mu HEAD # timeout=10
00:00:06.192 > git checkout -f 98d6b8327afc23a73b335b56c2817216b73f106d # timeout=5
00:00:06.215 Commit message: "jenkins/jjb-config: Retab check_jenkins_labels.sh"
00:00:06.215 > git rev-list --no-walk 98d6b8327afc23a73b335b56c2817216b73f106d # timeout=10
00:00:06.307 [Pipeline] Start of Pipeline
00:00:06.322 [Pipeline] library
00:00:06.323 Loading library shm_lib@master
00:00:06.324 Library shm_lib@master is cached. Copying from home.
00:00:06.340 [Pipeline] node
00:00:06.350 Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu20-vg-autotest
00:00:06.352 [Pipeline] {
00:00:06.366 [Pipeline] catchError
00:00:06.367 [Pipeline] {
00:00:06.381 [Pipeline] wrap
00:00:06.391 [Pipeline] {
00:00:06.398 [Pipeline] stage
00:00:06.400 [Pipeline] { (Prologue)
00:00:06.413 [Pipeline] echo
00:00:06.414 Node: VM-host-SM16
00:00:06.418 [Pipeline] cleanWs
00:00:06.427 [WS-CLEANUP] Deleting project workspace...
00:00:06.427 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.432 [WS-CLEANUP] done
00:00:06.576 [Pipeline] setCustomBuildProperty
00:00:06.637 [Pipeline] nodesByLabel
00:00:06.639 Found a total of 1 nodes with the 'sorcerer' label
00:00:06.647 [Pipeline] httpRequest
00:00:06.651 HttpMethod: GET
00:00:06.651 URL: http://10.211.11.40/jbp_98d6b8327afc23a73b335b56c2817216b73f106d.tar.gz
00:00:06.652 Sending request to url: http://10.211.11.40/jbp_98d6b8327afc23a73b335b56c2817216b73f106d.tar.gz
00:00:06.653 Response Code: HTTP/1.1 200 OK
00:00:06.653 Success: Status code 200 is in the accepted range: 200,404
00:00:06.653 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/jbp_98d6b8327afc23a73b335b56c2817216b73f106d.tar.gz
00:00:07.589 [Pipeline] sh
00:00:07.868 + tar --no-same-owner -xf jbp_98d6b8327afc23a73b335b56c2817216b73f106d.tar.gz
00:00:07.886 [Pipeline] httpRequest
00:00:07.890 HttpMethod: GET
00:00:07.890 URL: http://10.211.11.40/spdk_3bec6cb2332024b091764b08a1ed629590cc0fd8.tar.gz
00:00:07.891 Sending request to url: http://10.211.11.40/spdk_3bec6cb2332024b091764b08a1ed629590cc0fd8.tar.gz
00:00:07.913 Response Code: HTTP/1.1 200 OK
00:00:07.914 Success: Status code 200 is in the accepted range: 200,404
00:00:07.914 Saving response body to /var/jenkins/workspace/ubuntu20-vg-autotest/spdk_3bec6cb2332024b091764b08a1ed629590cc0fd8.tar.gz
00:01:07.389 [Pipeline] sh
00:01:07.668 + tar --no-same-owner -xf spdk_3bec6cb2332024b091764b08a1ed629590cc0fd8.tar.gz
00:01:11.020 [Pipeline] sh
00:01:11.302 + git -C spdk log --oneline -n5
00:01:11.302 3bec6cb23 module/bdev: Fix -Werror=maybe-uninitialized instances under raid/*
00:01:11.302 cbeecee61 nvme: use array index to get pointer for MAKE_DIGEST_WORD
00:01:11.302 f8fe0c418 test/unit/lib/nvme: initialize qpair in test_nvme_allocate_request_null()
00:01:11.302 744b9950e app/spdk_dd: dd was freezing with empty input file and count/skip flags
00:01:11.302 156969520 lib/trace : Display names for user created threads
00:01:11.320 [Pipeline] writeFile
00:01:11.336 [Pipeline] sh
00:01:11.615 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:11.625 [Pipeline] sh
00:01:11.905 + cat autorun-spdk.conf
00:01:11.905 RUN_NIGHTLY=1
00:01:11.905 SPDK_TEST_UNITTEST=1
00:01:11.905 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.905 SPDK_TEST_NVME=1
00:01:11.905 SPDK_TEST_BLOCKDEV=1
00:01:11.905 SPDK_RUN_ASAN=1
00:01:11.905 SPDK_RUN_UBSAN=1
00:01:11.905 SPDK_TEST_RAID5=1
00:01:11.912 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:11.914 [Pipeline] }
00:01:11.932 [Pipeline] // stage
00:01:11.947 [Pipeline] stage
00:01:11.949 [Pipeline] { (Run VM)
00:01:11.962 [Pipeline] sh
00:01:12.242 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:12.242 + echo 'Start stage prepare_nvme.sh'
00:01:12.242 Start stage prepare_nvme.sh
00:01:12.242 + [[ -n 4 ]]
00:01:12.242 + disk_prefix=ex4
00:01:12.242 + [[ -n /var/jenkins/workspace/ubuntu20-vg-autotest ]]
00:01:12.242 + [[ -e /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf ]]
00:01:12.242 + source /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf
00:01:12.242 ++ RUN_NIGHTLY=1
00:01:12.242 ++ SPDK_TEST_UNITTEST=1
00:01:12.242 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.242 ++ SPDK_TEST_NVME=1
00:01:12.242 ++ SPDK_TEST_BLOCKDEV=1
00:01:12.242 ++ SPDK_RUN_ASAN=1
00:01:12.242 ++ SPDK_RUN_UBSAN=1
00:01:12.242 ++ SPDK_TEST_RAID5=1
00:01:12.242 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:12.242 + cd /var/jenkins/workspace/ubuntu20-vg-autotest
00:01:12.242 + nvme_files=()
00:01:12.242 + declare -A nvme_files
00:01:12.242 + backend_dir=/var/lib/libvirt/images/backends
00:01:12.242 + nvme_files['nvme.img']=5G
00:01:12.242 + nvme_files['nvme-cmb.img']=5G
00:01:12.242 + nvme_files['nvme-multi0.img']=4G
00:01:12.242 + nvme_files['nvme-multi1.img']=4G
00:01:12.242 + nvme_files['nvme-multi2.img']=4G
00:01:12.242 + nvme_files['nvme-openstack.img']=8G
00:01:12.242 + nvme_files['nvme-zns.img']=5G
00:01:12.242 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:12.242 + (( SPDK_TEST_FTL == 1 ))
00:01:12.242 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:12.242 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:12.242 + for nvme in "${!nvme_files[@]}"
00:01:12.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:12.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:12.242 + for nvme in "${!nvme_files[@]}"
00:01:12.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:12.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:12.242 + for nvme in "${!nvme_files[@]}"
00:01:12.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:12.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:12.242 + for nvme in "${!nvme_files[@]}"
00:01:12.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:12.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:12.242 + for nvme in "${!nvme_files[@]}"
00:01:12.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:12.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:12.242 + for nvme in "${!nvme_files[@]}"
00:01:12.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:12.242 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:12.242 + for nvme in "${!nvme_files[@]}"
00:01:12.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:12.501 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:12.501 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:12.501 + echo 'End stage prepare_nvme.sh'
00:01:12.501 End stage prepare_nvme.sh
00:01:12.513 [Pipeline] sh
00:01:12.793 + DISTRO=ubuntu2004 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:12.793 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f ubuntu2004
00:01:12.793
00:01:12.793 DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant
00:01:12.793 SPDK_DIR=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk
00:01:12.793 VAGRANT_TARGET=/var/jenkins/workspace/ubuntu20-vg-autotest
00:01:12.793 HELP=0
00:01:12.793 DRY_RUN=0
00:01:12.793 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,
00:01:12.793 NVME_DISKS_TYPE=nvme,
00:01:12.793 NVME_AUTO_CREATE=0
00:01:12.793 NVME_DISKS_NAMESPACES=,
00:01:12.793 NVME_CMB=,
00:01:12.793 NVME_PMR=,
00:01:12.793 NVME_ZNS=,
00:01:12.793 NVME_MS=,
00:01:12.793 NVME_FDP=,
00:01:12.793 SPDK_VAGRANT_DISTRO=ubuntu2004
00:01:12.793 SPDK_VAGRANT_VMCPU=10
00:01:12.793 SPDK_VAGRANT_VMRAM=12288
00:01:12.793 SPDK_VAGRANT_PROVIDER=libvirt
00:01:12.793 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:12.793 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:12.793 SPDK_OPENSTACK_NETWORK=0
00:01:12.793 VAGRANT_PACKAGE_BOX=0
00:01:12.793 VAGRANTFILE=/var/jenkins/workspace/ubuntu20-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:12.793 FORCE_DISTRO=true
00:01:12.793 VAGRANT_BOX_VERSION=
00:01:12.793 EXTRA_VAGRANTFILES=
00:01:12.793 NIC_MODEL=e1000
00:01:12.793
00:01:12.793 mkdir: created directory '/var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt'
00:01:12.793 /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt /var/jenkins/workspace/ubuntu20-vg-autotest
00:01:15.344 Bringing machine 'default' up with 'libvirt' provider...
00:01:15.609 ==> default: Creating image (snapshot of base box volume).
00:01:15.610 ==> default: Creating domain with the following settings...
00:01:15.610 ==> default: -- Name: ubuntu2004-20.04-1678329680-1737_default_1707807709_cc34738bff015d0a2901
00:01:15.610 ==> default: -- Domain type: kvm
00:01:15.610 ==> default: -- Cpus: 10
00:01:15.610 ==> default: -- Feature: acpi
00:01:15.610 ==> default: -- Feature: apic
00:01:15.610 ==> default: -- Feature: pae
00:01:15.610 ==> default: -- Memory: 12288M
00:01:15.610 ==> default: -- Memory Backing: hugepages:
00:01:15.610 ==> default: -- Management MAC:
00:01:15.610 ==> default: -- Loader:
00:01:15.610 ==> default: -- Nvram:
00:01:15.610 ==> default: -- Base box: spdk/ubuntu2004
00:01:15.610 ==> default: -- Storage pool: default
00:01:15.610 ==> default: -- Image: /var/lib/libvirt/images/ubuntu2004-20.04-1678329680-1737_default_1707807709_cc34738bff015d0a2901.img (20G)
00:01:15.610 ==> default: -- Volume Cache: default
00:01:15.610 ==> default: -- Kernel:
00:01:15.610 ==> default: -- Initrd:
00:01:15.610 ==> default: -- Graphics Type: vnc
00:01:15.610 ==> default: -- Graphics Port: -1
00:01:15.610 ==> default: -- Graphics IP: 127.0.0.1
00:01:15.610 ==> default: -- Graphics Password: Not defined
00:01:15.610 ==> default: -- Video Type: cirrus
00:01:15.610 ==> default: -- Video VRAM: 9216
00:01:15.610 ==> default: -- Sound Type:
00:01:15.610 ==> default: -- Keymap: en-us
00:01:15.610 ==> default: -- TPM Path:
00:01:15.610 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:15.610 ==> default: -- Command line args:
00:01:15.610 ==> default: -> value=-device,
00:01:15.610 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:15.610 ==> default: -> value=-drive,
00:01:15.610 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:01:15.610 ==> default: -> value=-device,
00:01:15.610 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:15.870 ==> default: Creating shared folders metadata...
00:01:15.870 ==> default: Starting domain.
00:01:17.776 ==> default: Waiting for domain to get an IP address...
00:01:32.654 ==> default: Waiting for SSH to become available...
00:01:32.654 ==> default: Configuring and enabling network interfaces...
00:01:37.923 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:43.193 ==> default: Mounting SSHFS shared folder...
00:01:43.452 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output => /home/vagrant/spdk_repo/output
00:01:43.452 ==> default: Checking Mount..
00:01:45.987 ==> default: Checking Mount..
00:01:45.987 ==> default: Folder Successfully Mounted!
00:01:45.987 ==> default: Running provisioner: file...
00:01:46.255 default: ~/.gitconfig => .gitconfig
00:01:46.529
00:01:46.529 SUCCESS!
00:01:46.529
00:01:46.529 cd to /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt and type "vagrant ssh" to use.
00:01:46.529 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:46.529 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt" to destroy all trace of vm.
00:01:46.529
00:01:46.538 [Pipeline] }
00:01:46.555 [Pipeline] // stage
00:01:46.563 [Pipeline] dir
00:01:46.564 Running in /var/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt
00:01:46.565 [Pipeline] {
00:01:46.578 [Pipeline] catchError
00:01:46.579 [Pipeline] {
00:01:46.592 [Pipeline] sh
00:01:46.870 + vagrant ssh-config --host vagrant
00:01:46.870 + sed -ne /^Host/,$p
00:01:46.870 + tee ssh_conf
00:01:51.069 Host vagrant
00:01:51.069 HostName 192.168.121.77
00:01:51.069 User vagrant
00:01:51.069 Port 22
00:01:51.069 UserKnownHostsFile /dev/null
00:01:51.069 StrictHostKeyChecking no
00:01:51.069 PasswordAuthentication no
00:01:51.069 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2004/20.04-1678329680-1737/libvirt/ubuntu2004
00:01:51.069 IdentitiesOnly yes
00:01:51.069 LogLevel FATAL
00:01:51.069 ForwardAgent yes
00:01:51.069 ForwardX11 yes
00:01:51.069
00:01:51.083 [Pipeline] withEnv
00:01:51.085 [Pipeline] {
00:01:51.101 [Pipeline] sh
00:01:51.379 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:51.379 source /etc/os-release
00:01:51.379 [[ -e /image.version ]] && img=$(< /image.version)
00:01:51.379 # Minimal, systemd-like check.
00:01:51.379 if [[ -e /.dockerenv ]]; then
00:01:51.379 # Clear garbage from the node's name:
00:01:51.379 # agt-er_autotest_547-896 -> autotest_547-896
00:01:51.379 agent=${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:51.379 if mountpoint -q /etc/hostname; then
00:01:51.379 # We can assume this is a mount from a host where container is running,
00:01:51.379 # so fetch its hostname to easily identify the target swarm worker.
00:01:51.379 container="$(< /etc/hostname) ($agent)"
00:01:51.379 else
00:01:51.379 # Fallback
00:01:51.379 container=$agent
00:01:51.379 fi
00:01:51.379 fi
00:01:51.379 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:51.379
00:01:52.014 [Pipeline] }
00:01:52.033 [Pipeline] // withEnv
00:01:52.042 [Pipeline] setCustomBuildProperty
00:01:52.056 [Pipeline] stage
00:01:52.058 [Pipeline] { (Tests)
00:01:52.077 [Pipeline] sh
00:01:52.356 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:52.937 [Pipeline] timeout
00:01:52.937 Timeout set to expire in 1 hr 0 min
00:01:52.939 [Pipeline] {
00:01:52.954 [Pipeline] sh
00:01:53.235 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:54.173 HEAD is now at 3bec6cb23 module/bdev: Fix -Werror=maybe-uninitialized instances under raid/*
00:01:54.186 [Pipeline] sh
00:01:54.464 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:55.045 [Pipeline] sh
00:01:55.324 + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu20-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:55.906 [Pipeline] sh
00:01:56.184 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo
00:01:56.784 ++ readlink -f spdk_repo
00:01:56.784 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:56.784 + [[ -n /home/vagrant/spdk_repo ]]
00:01:56.784 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:56.784 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:56.784 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:56.784 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:56.784 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:56.784 + cd /home/vagrant/spdk_repo
00:01:56.784 + source /etc/os-release
00:01:56.784 ++ NAME=Ubuntu
00:01:56.784 ++ VERSION='20.04.5 LTS (Focal Fossa)'
00:01:56.784 ++ ID=ubuntu
00:01:56.784 ++ ID_LIKE=debian
00:01:56.784 ++ PRETTY_NAME='Ubuntu 20.04.5 LTS'
00:01:56.784 ++ VERSION_ID=20.04
00:01:56.784 ++ HOME_URL=https://www.ubuntu.com/
00:01:56.784 ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:56.784 ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:56.784 ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:56.784 ++ VERSION_CODENAME=focal
00:01:56.784 ++ UBUNTU_CODENAME=focal
00:01:56.784 + uname -a
00:01:56.784 Linux ubuntu2004-cloud-1678329680-1737 5.4.0-144-generic #161-Ubuntu SMP Fri Feb 3 14:49:04 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
00:01:56.784 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:57.042 Hugepages
00:01:57.042 node     hugesize     free /  total
00:01:57.042 node0   1048576kB        0 /      0
00:01:57.042 node0      2048kB        0 /      0
00:01:57.042
00:01:57.042 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:57.042 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:57.042 NVMe     0000:00:06.0    1b36   0010   0       nvme             nvme0      nvme0n1
00:01:57.042 + rm -f /tmp/spdk-ld-path
00:01:57.042 + source autorun-spdk.conf
00:01:57.042 ++ RUN_NIGHTLY=1
00:01:57.042 ++ SPDK_TEST_UNITTEST=1
00:01:57.042 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.042 ++ SPDK_TEST_NVME=1
00:01:57.042 ++ SPDK_TEST_BLOCKDEV=1
00:01:57.042 ++ SPDK_RUN_ASAN=1
00:01:57.042 ++ SPDK_RUN_UBSAN=1
00:01:57.042 ++ SPDK_TEST_RAID5=1
00:01:57.042 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:57.042 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:57.042 + [[ -n '' ]]
00:01:57.042 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:57.042 + for M in /var/spdk/build-*-manifest.txt
00:01:57.042 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:57.042 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:57.042 + for M in /var/spdk/build-*-manifest.txt
00:01:57.042 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:57.042 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:57.042 ++ uname
00:01:57.042 + [[ Linux == \L\i\n\u\x ]]
00:01:57.042 + sudo dmesg -T
00:01:57.042 + sudo dmesg --clear
00:01:57.042 + dmesg_pid=2371
00:01:57.042 + [[ Ubuntu == FreeBSD ]]
00:01:57.042 + sudo dmesg -Tw
00:01:57.042 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:57.042 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:57.042 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:57.042 + [[ -x /usr/src/fio-static/fio ]]
00:01:57.042 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:57.042 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:57.042 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:57.042 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:57.042 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:57.042 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:57.042 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:57.042 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:57.042 Test configuration:
00:01:57.042 RUN_NIGHTLY=1
00:01:57.042 SPDK_TEST_UNITTEST=1
00:01:57.042 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.042 SPDK_TEST_NVME=1
00:01:57.042 SPDK_TEST_BLOCKDEV=1
00:01:57.042 SPDK_RUN_ASAN=1
00:01:57.042 SPDK_RUN_UBSAN=1
00:01:57.042 SPDK_TEST_RAID5=1
00:01:57.042 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
07:02:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
07:02:30 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
07:02:30 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:02:30 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
07:02:30 -- common/autobuild_common.sh@435 -- $ date +%s
07:02:30 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707807750.XXXXXX
07:02:30 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707807750.eWrUtB
07:02:30 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
07:02:30 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
07:02:30 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
07:02:30 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
07:02:30 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
07:02:30 -- common/autobuild_common.sh@451 -- $ get_config_params
07:02:30 -- common/autotest_common.sh@385 -- $ xtrace_disable
07:02:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.301 07:02:30 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f'
07:02:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
07:02:30 -- spdk/autobuild.sh@12 -- $ umask 022
07:02:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
07:02:30 -- spdk/autobuild.sh@16 -- $ date -u
00:01:57.301 Tue Feb 13 07:02:30 UTC 2024
00:01:57.301 07:02:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:57.301 v24.05-pre-72-g3bec6cb23
00:01:57.301 07:02:30 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:57.301 07:02:30 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:57.301 07:02:30 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']'
00:01:57.301 07:02:30 -- common/autotest_common.sh@1081 -- $ xtrace_disable
00:01:57.301 07:02:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.301 ************************************
00:01:57.301 START TEST asan
00:01:57.301 ************************************
00:01:57.301 using asan
00:01:57.301 ************************************
00:01:57.301 END TEST asan
00:01:57.301 ************************************
00:01:57.301 07:02:30 -- common/autotest_common.sh@1102 -- $ echo 'using asan'
00:01:57.301
00:01:57.301 real	0m0.000s
00:01:57.301 user	0m0.000s
00:01:57.301 sys	0m0.000s
00:01:57.301 07:02:30 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:57.301 07:02:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.301 07:02:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:57.301 07:02:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:57.301 07:02:30 -- common/autotest_common.sh@1075 -- $ '[' 3 -le 1 ']'
00:01:57.301 07:02:30 -- common/autotest_common.sh@1081 -- $ xtrace_disable
00:01:57.301 07:02:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.301 ************************************
00:01:57.301 START TEST ubsan
00:01:57.301 ************************************
00:01:57.301 using ubsan
00:01:57.301 ************************************
00:01:57.301 END TEST ubsan
00:01:57.301 ************************************
00:01:57.301 07:02:30 -- common/autotest_common.sh@1102 -- $ echo 'using ubsan'
00:01:57.301
00:01:57.301 real	0m0.000s
00:01:57.301 user	0m0.000s
00:01:57.301 sys	0m0.000s
00:01:57.301 07:02:30 -- common/autotest_common.sh@1103 -- $ xtrace_disable
00:01:57.301 07:02:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.301 07:02:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:57.301 07:02:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:57.301 07:02:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:57.301 07:02:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:57.301 07:02:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:57.301 07:02:30 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:01:57.301 07:02:30 -- spdk/autobuild.sh@58 -- $ unittest_build
00:01:57.301 07:02:30 -- common/autobuild_common.sh@411 -- $ run_test unittest_build _unittest_build
00:01:57.301 07:02:30 -- common/autotest_common.sh@1075 -- $ '[' 2 -le 1 ']'
00:01:57.301 07:02:30 -- common/autotest_common.sh@1081 -- $ xtrace_disable
00:01:57.301 07:02:30 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.301 ************************************
00:01:57.301 START TEST unittest_build
00:01:57.301 ************************************
00:01:57.301 07:02:30 -- common/autotest_common.sh@1102 -- $ _unittest_build
00:01:57.301 07:02:30 -- common/autobuild_common.sh@402 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --without-shared
00:01:57.301 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:57.301 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:57.868 Using 'verbs' RDMA provider
00:02:13.314 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:28.191 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:28.191 Creating mk/config.mk...done.
00:02:28.191 Creating mk/cc.flags.mk...done.
00:02:28.191 Type 'make' to build.
00:02:28.191 07:03:00 -- common/autobuild_common.sh@403 -- $ make -j10
00:02:28.191 make[1]: Nothing to be done for 'all'.
00:02:29.124 ./include//reg_sizes.asm:208: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
00:02:33.262 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other]
[the two nasm warnings above repeat verbatim for each ISA-L/ISA-L-crypto object assembled between 00:02:29.124 and 00:02:41.340; the duplicate lines are omitted]
00:02:41.340 The Meson build system
00:02:41.340 Version: 1.0.1
00:02:41.340 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:41.340 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:41.340 Build type: native build
00:02:41.340 Program cat found: YES (/usr/bin/cat)
00:02:41.340 Project name: DPDK
00:02:41.340 Project version: 23.11.0
00:02:41.340 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0")
00:02:41.340 C linker for the host machine: cc ld.bfd 2.34
00:02:41.340 Host machine cpu family: x86_64
00:02:41.340 Host machine cpu: x86_64
00:02:41.340 Message: ## Building in Developer Mode ##
00:02:41.340 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:41.340 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:41.340 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:41.340 Program python3 found: YES (/usr/bin/python3)
00:02:41.340 Program cat found: YES (/usr/bin/cat)
00:02:41.340 Compiler for C supports arguments -march=native: YES
00:02:41.340 Checking for size of "void *" : 8
00:02:41.340 Checking for size of "void *" : 8
00:02:41.340 Library m found: YES
00:02:41.340 Library numa found: YES
00:02:41.340 Has header "numaif.h" : YES
00:02:41.340 Library fdt found: NO
00:02:41.340 Library execinfo found: NO
00:02:41.340 Has header "execinfo.h" : YES
00:02:41.340 Found pkg-config: /usr/bin/pkg-config (0.29.1)
00:02:41.340 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:41.340 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:41.340 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:41.340 Run-time dependency openssl found: YES 1.1.1f
00:02:41.340 Run-time dependency libpcap found: NO (tried pkgconfig)
00:02:41.340 Library pcap found: NO
00:02:41.340 Compiler for C supports arguments -Wcast-qual: YES
00:02:41.340 Compiler for C supports arguments -Wdeprecated: YES
00:02:41.340 Compiler for C supports arguments -Wformat: YES
00:02:41.340 Compiler for C supports arguments -Wformat-nonliteral: YES
00:02:41.340 Compiler for C supports arguments
-Wformat-security: YES 00:02:41.340 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:41.340 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:41.340 Compiler for C supports arguments -Wnested-externs: YES 00:02:41.340 Compiler for C supports arguments -Wold-style-definition: YES 00:02:41.340 Compiler for C supports arguments -Wpointer-arith: YES 00:02:41.340 Compiler for C supports arguments -Wsign-compare: YES 00:02:41.340 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:41.340 Compiler for C supports arguments -Wundef: YES 00:02:41.340 Compiler for C supports arguments -Wwrite-strings: YES 00:02:41.340 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:41.340 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:41.340 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:41.340 Program objdump found: YES (/usr/bin/objdump) 00:02:41.340 Compiler for C supports arguments -mavx512f: YES 00:02:41.340 Checking if "AVX512 checking" compiles: YES 00:02:41.340 Fetching value of define "__SSE4_2__" : 1 00:02:41.340 Fetching value of define "__AES__" : 1 00:02:41.340 Fetching value of define "__AVX__" : 1 00:02:41.340 Fetching value of define "__AVX2__" : 1 00:02:41.340 Fetching value of define "__AVX512BW__" : 00:02:41.340 Fetching value of define "__AVX512CD__" : 00:02:41.340 Fetching value of define "__AVX512DQ__" : 00:02:41.340 Fetching value of define "__AVX512F__" : 00:02:41.340 Fetching value of define "__AVX512VL__" : 00:02:41.340 Fetching value of define "__PCLMUL__" : 1 00:02:41.340 Fetching value of define "__RDRND__" : 1 00:02:41.340 Fetching value of define "__RDSEED__" : 1 00:02:41.340 Fetching value of define "__VPCLMULQDQ__" : 00:02:41.340 Fetching value of define "__znver1__" : 00:02:41.340 Fetching value of define "__znver2__" : 00:02:41.340 Fetching value of define "__znver3__" : 00:02:41.340 Fetching value of define "__znver4__" : 00:02:41.340 Library asan found: YES 00:02:41.340 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:41.340 Message: lib/log: Defining dependency "log" 00:02:41.340 Message: lib/kvargs: Defining dependency "kvargs" 00:02:41.340 Message: lib/telemetry: Defining dependency "telemetry" 00:02:41.340 Library rt found: YES 00:02:41.340 Checking for function "getentropy" : NO 00:02:41.340 Message: lib/eal: Defining dependency "eal" 00:02:41.340 Message: lib/ring: Defining dependency "ring" 00:02:41.340 Message: lib/rcu: Defining dependency "rcu" 00:02:41.340 Message: lib/mempool: Defining dependency "mempool" 00:02:41.340 Message: lib/mbuf: Defining dependency "mbuf" 00:02:41.340 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:41.340 Fetching value of define "__AVX512F__" : (cached) 00:02:41.340 Compiler for C supports arguments -mpclmul: YES 00:02:41.340 Compiler for C supports arguments -maes: YES 00:02:41.340 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:41.340 Compiler for C supports arguments -mavx512bw: YES 00:02:41.340 Compiler for C supports arguments -mavx512dq: YES 00:02:41.340 Compiler for C supports arguments -mavx512vl: YES 00:02:41.340 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:41.340 Compiler for C supports arguments -mavx2: YES 00:02:41.340 Compiler for C supports arguments -mavx: YES 00:02:41.340 Message: lib/net: Defining dependency "net" 00:02:41.340 Message: lib/meter: Defining dependency "meter" 00:02:41.340 Message: lib/ethdev: Defining dependency "ethdev" 
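Note: each "Compiler for C supports arguments ..." line above is Meson probing the compiler (its has_argument()-style check): it compiles a throwaway translation unit with the candidate flag and records YES or NO. A minimal stand-alone sketch of the -mavx512f probe, assuming the same cc seen in this log (illustrative only, not the literal command Meson runs):

    # write a trivial source file, then see whether cc accepts the flag
    echo 'int main(void) { return 0; }' > /tmp/probe.c
    cc -mavx512f -Werror /tmp/probe.c -o /dev/null && echo YES || echo NO

The empty "Fetching value of define" results (e.g. "__AVX512F__" : ) just mean the probe compiled but the macro is not predefined for this host under -march=native, i.e. the build machine lacks that CPU feature.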
00:02:41.340 Message: lib/pci: Defining dependency "pci" 00:02:41.340 Message: lib/cmdline: Defining dependency "cmdline" 00:02:41.340 Message: lib/hash: Defining dependency "hash" 00:02:41.340 Message: lib/timer: Defining dependency "timer" 00:02:41.340 Message: lib/compressdev: Defining dependency "compressdev" 00:02:41.340 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:41.340 Message: lib/dmadev: Defining dependency "dmadev" 00:02:41.340 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:41.340 Message: lib/power: Defining dependency "power" 00:02:41.340 Message: lib/reorder: Defining dependency "reorder" 00:02:41.340 Message: lib/security: Defining dependency "security" 00:02:41.340 Has header "linux/userfaultfd.h" : YES 00:02:41.340 Has header "linux/vduse.h" : NO 00:02:41.340 Message: lib/vhost: Defining dependency "vhost" 00:02:41.340 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:41.340 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:41.340 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:41.340 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:41.340 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:41.340 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:41.340 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:41.340 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:41.340 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:41.340 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:41.340 Program doxygen found: YES (/usr/bin/doxygen) 00:02:41.340 Configuring doxy-api-html.conf using configuration 00:02:41.340 Configuring doxy-api-man.conf using configuration 00:02:41.340 Program mandb found: YES (/usr/bin/mandb) 00:02:41.340 Program sphinx-build found: NO 00:02:41.340 Configuring rte_build_config.h using configuration 00:02:41.340 Message: 00:02:41.340 ================= 00:02:41.340 Applications Enabled 00:02:41.340 ================= 00:02:41.340 00:02:41.340 apps: 00:02:41.340 00:02:41.340 00:02:41.340 Message: 00:02:41.340 ================= 00:02:41.340 Libraries Enabled 00:02:41.340 ================= 00:02:41.340 00:02:41.340 libs: 00:02:41.340 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:41.340 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:41.340 cryptodev, dmadev, power, reorder, security, vhost, 00:02:41.340 00:02:41.340 Message: 00:02:41.340 =============== 00:02:41.340 Drivers Enabled 00:02:41.340 =============== 00:02:41.340 00:02:41.340 common: 00:02:41.340 00:02:41.340 bus: 00:02:41.340 pci, vdev, 00:02:41.340 mempool: 00:02:41.340 ring, 00:02:41.340 dma: 00:02:41.340 00:02:41.340 net: 00:02:41.340 00:02:41.340 crypto: 00:02:41.340 00:02:41.340 compress: 00:02:41.340 00:02:41.340 vdpa: 00:02:41.340 00:02:41.340 00:02:41.340 Message: 00:02:41.340 ================= 00:02:41.340 Content Skipped 00:02:41.340 ================= 00:02:41.340 00:02:41.340 apps: 00:02:41.340 dumpcap: explicitly disabled via build config 00:02:41.340 graph: explicitly disabled via build config 00:02:41.340 pdump: explicitly disabled via build config 00:02:41.340 proc-info: explicitly disabled via build config 00:02:41.340 test-acl: explicitly disabled via build config 00:02:41.340 test-bbdev: explicitly disabled via build config 00:02:41.340 test-cmdline: explicitly disabled via build config 
00:02:41.341 test-compress-perf: explicitly disabled via build config 00:02:41.341 test-crypto-perf: explicitly disabled via build config 00:02:41.341 test-dma-perf: explicitly disabled via build config 00:02:41.341 test-eventdev: explicitly disabled via build config 00:02:41.341 test-fib: explicitly disabled via build config 00:02:41.341 test-flow-perf: explicitly disabled via build config 00:02:41.341 test-gpudev: explicitly disabled via build config 00:02:41.341 test-mldev: explicitly disabled via build config 00:02:41.341 test-pipeline: explicitly disabled via build config 00:02:41.341 test-pmd: explicitly disabled via build config 00:02:41.341 test-regex: explicitly disabled via build config 00:02:41.341 test-sad: explicitly disabled via build config 00:02:41.341 test-security-perf: explicitly disabled via build config 00:02:41.341 00:02:41.341 libs: 00:02:41.341 metrics: explicitly disabled via build config 00:02:41.341 acl: explicitly disabled via build config 00:02:41.341 bbdev: explicitly disabled via build config 00:02:41.341 bitratestats: explicitly disabled via build config 00:02:41.341 bpf: explicitly disabled via build config 00:02:41.341 cfgfile: explicitly disabled via build config 00:02:41.341 distributor: explicitly disabled via build config 00:02:41.341 efd: explicitly disabled via build config 00:02:41.341 eventdev: explicitly disabled via build config 00:02:41.341 dispatcher: explicitly disabled via build config 00:02:41.341 gpudev: explicitly disabled via build config 00:02:41.341 gro: explicitly disabled via build config 00:02:41.341 gso: explicitly disabled via build config 00:02:41.341 ip_frag: explicitly disabled via build config 00:02:41.341 jobstats: explicitly disabled via build config 00:02:41.341 latencystats: explicitly disabled via build config 00:02:41.341 lpm: explicitly disabled via build config 00:02:41.341 member: explicitly disabled via build config 00:02:41.341 pcapng: explicitly disabled via build config 00:02:41.341 rawdev: explicitly disabled via build config 00:02:41.341 regexdev: explicitly disabled via build config 00:02:41.341 mldev: explicitly disabled via build config 00:02:41.341 rib: explicitly disabled via build config 00:02:41.341 sched: explicitly disabled via build config 00:02:41.341 stack: explicitly disabled via build config 00:02:41.341 ipsec: explicitly disabled via build config 00:02:41.341 pdcp: explicitly disabled via build config 00:02:41.341 fib: explicitly disabled via build config 00:02:41.341 port: explicitly disabled via build config 00:02:41.341 pdump: explicitly disabled via build config 00:02:41.341 table: explicitly disabled via build config 00:02:41.341 pipeline: explicitly disabled via build config 00:02:41.341 graph: explicitly disabled via build config 00:02:41.341 node: explicitly disabled via build config 00:02:41.341 00:02:41.341 drivers: 00:02:41.341 common/cpt: not in enabled drivers build config 00:02:41.341 common/dpaax: not in enabled drivers build config 00:02:41.341 common/iavf: not in enabled drivers build config 00:02:41.341 common/idpf: not in enabled drivers build config 00:02:41.341 common/mvep: not in enabled drivers build config 00:02:41.341 common/octeontx: not in enabled drivers build config 00:02:41.341 bus/auxiliary: not in enabled drivers build config 00:02:41.341 bus/cdx: not in enabled drivers build config 00:02:41.341 bus/dpaa: not in enabled drivers build config 00:02:41.341 bus/fslmc: not in enabled drivers build config 00:02:41.341 bus/ifpga: not in enabled drivers build config 
00:02:41.341 bus/platform: not in enabled drivers build config 00:02:41.341 bus/vmbus: not in enabled drivers build config 00:02:41.341 common/cnxk: not in enabled drivers build config 00:02:41.341 common/mlx5: not in enabled drivers build config 00:02:41.341 common/nfp: not in enabled drivers build config 00:02:41.341 common/qat: not in enabled drivers build config 00:02:41.341 common/sfc_efx: not in enabled drivers build config 00:02:41.341 mempool/bucket: not in enabled drivers build config 00:02:41.341 mempool/cnxk: not in enabled drivers build config 00:02:41.341 mempool/dpaa: not in enabled drivers build config 00:02:41.341 mempool/dpaa2: not in enabled drivers build config 00:02:41.341 mempool/octeontx: not in enabled drivers build config 00:02:41.341 mempool/stack: not in enabled drivers build config 00:02:41.341 dma/cnxk: not in enabled drivers build config 00:02:41.341 dma/dpaa: not in enabled drivers build config 00:02:41.341 dma/dpaa2: not in enabled drivers build config 00:02:41.341 dma/hisilicon: not in enabled drivers build config 00:02:41.341 dma/idxd: not in enabled drivers build config 00:02:41.341 dma/ioat: not in enabled drivers build config 00:02:41.341 dma/skeleton: not in enabled drivers build config 00:02:41.341 net/af_packet: not in enabled drivers build config 00:02:41.341 net/af_xdp: not in enabled drivers build config 00:02:41.341 net/ark: not in enabled drivers build config 00:02:41.341 net/atlantic: not in enabled drivers build config 00:02:41.341 net/avp: not in enabled drivers build config 00:02:41.341 net/axgbe: not in enabled drivers build config 00:02:41.341 net/bnx2x: not in enabled drivers build config 00:02:41.341 net/bnxt: not in enabled drivers build config 00:02:41.341 net/bonding: not in enabled drivers build config 00:02:41.341 net/cnxk: not in enabled drivers build config 00:02:41.341 net/cpfl: not in enabled drivers build config 00:02:41.341 net/cxgbe: not in enabled drivers build config 00:02:41.341 net/dpaa: not in enabled drivers build config 00:02:41.341 net/dpaa2: not in enabled drivers build config 00:02:41.341 net/e1000: not in enabled drivers build config 00:02:41.341 net/ena: not in enabled drivers build config 00:02:41.341 net/enetc: not in enabled drivers build config 00:02:41.341 net/enetfec: not in enabled drivers build config 00:02:41.341 net/enic: not in enabled drivers build config 00:02:41.341 net/failsafe: not in enabled drivers build config 00:02:41.341 net/fm10k: not in enabled drivers build config 00:02:41.341 net/gve: not in enabled drivers build config 00:02:41.341 net/hinic: not in enabled drivers build config 00:02:41.341 net/hns3: not in enabled drivers build config 00:02:41.341 net/i40e: not in enabled drivers build config 00:02:41.341 net/iavf: not in enabled drivers build config 00:02:41.341 net/ice: not in enabled drivers build config 00:02:41.341 net/idpf: not in enabled drivers build config 00:02:41.341 net/igc: not in enabled drivers build config 00:02:41.341 net/ionic: not in enabled drivers build config 00:02:41.341 net/ipn3ke: not in enabled drivers build config 00:02:41.341 net/ixgbe: not in enabled drivers build config 00:02:41.341 net/mana: not in enabled drivers build config 00:02:41.341 net/memif: not in enabled drivers build config 00:02:41.341 net/mlx4: not in enabled drivers build config 00:02:41.341 net/mlx5: not in enabled drivers build config 00:02:41.341 net/mvneta: not in enabled drivers build config 00:02:41.341 net/mvpp2: not in enabled drivers build config 00:02:41.341 net/netvsc: not in 
enabled drivers build config 00:02:41.341 net/nfb: not in enabled drivers build config 00:02:41.341 net/nfp: not in enabled drivers build config 00:02:41.341 net/ngbe: not in enabled drivers build config 00:02:41.341 net/null: not in enabled drivers build config 00:02:41.341 net/octeontx: not in enabled drivers build config 00:02:41.341 net/octeon_ep: not in enabled drivers build config 00:02:41.341 net/pcap: not in enabled drivers build config 00:02:41.341 net/pfe: not in enabled drivers build config 00:02:41.341 net/qede: not in enabled drivers build config 00:02:41.341 net/ring: not in enabled drivers build config 00:02:41.341 net/sfc: not in enabled drivers build config 00:02:41.341 net/softnic: not in enabled drivers build config 00:02:41.341 net/tap: not in enabled drivers build config 00:02:41.341 net/thunderx: not in enabled drivers build config 00:02:41.341 net/txgbe: not in enabled drivers build config 00:02:41.341 net/vdev_netvsc: not in enabled drivers build config 00:02:41.341 net/vhost: not in enabled drivers build config 00:02:41.341 net/virtio: not in enabled drivers build config 00:02:41.341 net/vmxnet3: not in enabled drivers build config 00:02:41.341 raw/*: missing internal dependency, "rawdev" 00:02:41.341 crypto/armv8: not in enabled drivers build config 00:02:41.341 crypto/bcmfs: not in enabled drivers build config 00:02:41.341 crypto/caam_jr: not in enabled drivers build config 00:02:41.341 crypto/ccp: not in enabled drivers build config 00:02:41.341 crypto/cnxk: not in enabled drivers build config 00:02:41.341 crypto/dpaa_sec: not in enabled drivers build config 00:02:41.341 crypto/dpaa2_sec: not in enabled drivers build config 00:02:41.341 crypto/ipsec_mb: not in enabled drivers build config 00:02:41.341 crypto/mlx5: not in enabled drivers build config 00:02:41.341 crypto/mvsam: not in enabled drivers build config 00:02:41.341 crypto/nitrox: not in enabled drivers build config 00:02:41.341 crypto/null: not in enabled drivers build config 00:02:41.341 crypto/octeontx: not in enabled drivers build config 00:02:41.341 crypto/openssl: not in enabled drivers build config 00:02:41.341 crypto/scheduler: not in enabled drivers build config 00:02:41.341 crypto/uadk: not in enabled drivers build config 00:02:41.341 crypto/virtio: not in enabled drivers build config 00:02:41.341 compress/isal: not in enabled drivers build config 00:02:41.341 compress/mlx5: not in enabled drivers build config 00:02:41.341 compress/octeontx: not in enabled drivers build config 00:02:41.341 compress/zlib: not in enabled drivers build config 00:02:41.341 regex/*: missing internal dependency, "regexdev" 00:02:41.341 ml/*: missing internal dependency, "mldev" 00:02:41.341 vdpa/ifc: not in enabled drivers build config 00:02:41.341 vdpa/mlx5: not in enabled drivers build config 00:02:41.341 vdpa/nfp: not in enabled drivers build config 00:02:41.341 vdpa/sfc: not in enabled drivers build config 00:02:41.341 event/*: missing internal dependency, "eventdev" 00:02:41.341 baseband/*: missing internal dependency, "bbdev" 00:02:41.341 gpu/*: missing internal dependency, "gpudev" 00:02:41.341 00:02:41.341 00:02:41.341 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:41.622 Build targets in project: 85 00:02:41.622 00:02:41.622 DPDK 23.11.0 00:02:41.622 00:02:41.622 User defined options 00:02:41.622 buildtype : debug 00:02:41.622 default_library : static 00:02:41.622 libdir : lib 00:02:41.622 prefix : 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:02:41.622 b_sanitize : address 00:02:41.622 c_args : -fPIC -Werror 00:02:41.622 c_link_args : 00:02:41.622 cpu_instruction_set: native 00:02:41.622 disable_apps : test-bbdev,test,pdump,test-sad,test-fib,test-dma-perf,test-acl,test-pipeline,test-eventdev,test-regex,test-mldev,test-security-perf,graph,proc-info,test-cmdline,test-crypto-perf,test-flow-perf,test-gpudev,test-pmd,dumpcap,test-compress-perf 00:02:41.622 disable_libs : gso,eventdev,ipsec,lpm,ip_frag,pdump,latencystats,pcapng,efd,gpudev,fib,rawdev,member,node,stack,bitratestats,pipeline,graph,mldev,gro,bbdev,cfgfile,metrics,rib,port,regexdev,table,bpf,pdcp,distributor,acl,sched,jobstats,dispatcher 00:02:41.622 enable_docs : false 00:02:41.622 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:41.622 enable_kmods : false 00:02:41.622 tests : false 00:02:41.622 00:02:41.622 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:41.622 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:42.213 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:42.213 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:42.213 [1/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:42.213 [2/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:42.213 [3/264] Linking static target lib/librte_log.a 00:02:42.471 [4/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:42.471 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:42.471 [6/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:42.471 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:42.471 [8/264] Linking static target lib/librte_kvargs.a 00:02:42.471 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:42.471 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:42.471 [11/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:42.471 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:42.729 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:42.729 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:42.729 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:42.729 [16/264] Linking static target lib/librte_telemetry.a 00:02:42.987 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:42.987 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:42.987 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:43.246 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.246 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:43.246 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.246 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.504 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.504 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.504 
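Note: the "User defined options" block above fully determines this DPDK configuration, so it can be reproduced outside CI. A hedged reconstruction of the configure step, assuming it is run from /home/vagrant/spdk_repo/spdk/dpdk (the Source dir Meson reports) and that the calling script passes these options through unchanged:

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --buildtype=debug --default-library=static --libdir=lib \
        -Db_sanitize=address -Dc_args='-fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps=... -Ddisable_libs=... \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    ninja -C build-tmp

The disable_apps and disable_libs values are elided here for brevity; the full comma-separated lists are printed verbatim in the options dump above.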
[26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.504 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:43.762 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:43.762 [28/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.762 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:43.762 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:43.762 [31/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:43.762 [32/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.021 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.021 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.021 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.021 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:44.021 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:44.021 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.021 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.021 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.279 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.538 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.797 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.797 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.797 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.797 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:44.797 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.797 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.797 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:45.055 [47/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:45.055 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:45.055 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:45.055 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:45.055 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:45.055 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:45.314 [53/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.314 [54/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:45.314 [55/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:45.314 [56/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:45.314 [57/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:45.314 [58/264] Linking target 
lib/librte_log.so.24.0 00:02:45.314 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:45.573 [60/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.573 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:45.573 [62/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:45.573 [63/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:45.573 [64/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:45.831 [65/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:45.831 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:45.831 [66/264] Linking target lib/librte_kvargs.so.24.0 00:02:45.831 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:45.831 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:45.831 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:45.831 [70/264] Linking target lib/librte_telemetry.so.24.0 00:02:45.831 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:46.090 [72/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:46.090 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:46.090 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:46.090 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:46.090 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:46.090 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:46.090 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:46.090 [78/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:46.090 [79/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:46.348 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:46.348 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:46.348 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:46.348 [83/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:46.914 [84/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:46.914 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:46.914 [85/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:46.914 [86/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:46.914 [87/264] Linking static target lib/librte_ring.a 00:02:46.914 [88/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:46.914 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:46.914 [89/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:46.914 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:47.206 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:47.206 [92/264] Linking static target 
lib/librte_eal.a 00:02:47.206 [93/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:47.206 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:47.465 [95/264] Linking static target lib/librte_mempool.a 00:02:47.465 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:47.465 [96/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:47.465 [97/264] Linking static target lib/librte_rcu.a 00:02:47.465 [98/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:47.723 [99/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:47.723 [100/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:47.723 [101/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:47.723 [102/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:47.723 [103/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:47.982 [104/264] Linking static target lib/librte_mbuf.a 00:02:47.982 [105/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:47.982 [106/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:47.982 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:47.982 [107/264] Linking static target lib/librte_net.a 00:02:47.982 [108/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:48.240 [109/264] Linking static target lib/librte_meter.a 00:02:48.240 [110/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.240 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:48.240 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:48.240 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:48.240 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:48.240 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:48.499 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:48.499 [114/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.757 [115/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.757 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:48.757 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:48.757 [117/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.757 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:49.324 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:49.324 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.324 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:49.324 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:49.583 
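Note: the recurring "./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored" lines interleaved with the build are emitted by NASM, most likely while assembling SPDK's bundled isa-l sources (its reg_sizes.asm declares a .note.gnu.property section, which carries GNU property notes such as CET markers). NASM releases that predate the 'note' section attribute ignore it and warn, exactly as the message says; the warning is noisy but harmless to this build. To confirm which assembler is in use (an illustrative check, not a command from this log):

    nasm -v    # a release that knows the 'note' attribute assembles the same file silently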
./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.841 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:49.841 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:49.841 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:49.841 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:49.841 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:49.841 [126/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:49.841 [127/264] Linking static target lib/librte_pci.a 00:02:49.841 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:50.099 [129/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:50.099 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:50.099 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:50.099 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:50.099 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:50.358 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:50.358 [134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:50.358 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:50.358 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:50.358 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:50.358 [137/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:50.358 [138/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.358 [139/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.358 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:50.616 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:50.616 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:50.616 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:50.616 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:50.616 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:50.616 [145/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.616 [146/264] Linking static target lib/librte_cmdline.a 00:02:50.875 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:50.875 [147/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:50.875 [148/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:50.875 [149/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.875 [150/264] Linking static target lib/librte_timer.a 00:02:51.133 [151/264] Linking static target lib/librte_ethdev.a 00:02:51.133 [152/264] 
Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:51.133 [153/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.133 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.133 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.133 [154/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:51.392 [155/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:51.392 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.392 [156/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:51.392 [157/264] Linking static target lib/librte_compressdev.a 00:02:51.662 [158/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:51.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.662 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.662 [160/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:51.662 [161/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:51.662 [162/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.662 [163/264] Linking static target lib/librte_hash.a 00:02:51.662 [164/264] Linking static target lib/librte_dmadev.a 00:02:51.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.662 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.928 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.928 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:51.928 [165/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.928 [166/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.928 [167/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.928 [168/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.928 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.187 [169/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.187 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.445 [170/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.445 [171/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.445 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.703 [172/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.703 [173/264] 
Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.703 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.703 [174/264] Linking static target lib/librte_cryptodev.a 00:02:52.703 [175/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.703 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.703 [176/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.703 [177/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.703 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:52.961 [178/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.961 [179/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:52.961 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.220 [180/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.220 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.220 [181/264] Linking static target lib/librte_power.a 00:02:53.220 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.220 [183/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.220 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.479 [184/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.479 [185/264] Linking static target lib/librte_reorder.a 00:02:53.479 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.479 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.479 [187/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.479 [188/264] Linking static target lib/librte_security.a 00:02:53.479 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.737 [189/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.738 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.738 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.738 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.738 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:53.996 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:54.255 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.255 [192/264] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:54.255 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:54.255 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:54.255 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:54.255 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:54.514 [193/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.514 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:54.514 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:54.514 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.514 [195/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.514 [196/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.514 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.514 ./include//reg_sizes.asm:358: warning: Unknown section attribute 'note' ignored on declaration of section `.note.gnu.property' [-w+other] 00:02:54.773 [198/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.773 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.773 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:55.032 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:55.032 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:55.032 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:55.032 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:55.032 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:55.032 [206/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:55.291 [207/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.291 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:55.291 [209/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.291 [210/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:55.291 [211/264] Linking static target drivers/librte_bus_vdev.a 00:02:55.291 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:55.291 [213/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.291 [214/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:55.291 [215/264] Linking static target drivers/librte_bus_pci.a 00:02:55.550 [216/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.550 [217/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:55.550 [218/264] Linking 
static target drivers/libtmp_rte_mempool_ring.a 00:02:55.809 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.809 [220/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.809 [221/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.809 [222/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.809 [223/264] Linking static target drivers/librte_mempool_ring.a 00:02:57.190 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:58.125 [225/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.700 [226/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.980 [227/264] Linking target lib/librte_eal.so.24.0 00:02:58.980 [228/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:58.981 [229/264] Linking target lib/librte_ring.so.24.0 00:02:58.981 [230/264] Linking target lib/librte_meter.so.24.0 00:02:58.981 [231/264] Linking target lib/librte_pci.so.24.0 00:02:58.981 [232/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:58.981 [233/264] Linking target lib/librte_timer.so.24.0 00:02:58.981 [234/264] Linking target lib/librte_dmadev.so.24.0 00:02:59.245 [235/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:59.245 [236/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:59.245 [237/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:59.245 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:59.245 [239/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:59.245 [240/264] Linking target lib/librte_rcu.so.24.0 00:02:59.245 [241/264] Linking target lib/librte_mempool.so.24.0 00:02:59.245 [242/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:59.245 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:59.245 [244/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:59.503 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:59.503 [246/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:59.503 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:59.503 [248/264] Linking target lib/librte_compressdev.so.24.0 00:02:59.503 [249/264] Linking target lib/librte_reorder.so.24.0 00:02:59.503 [250/264] Linking target lib/librte_net.so.24.0 00:02:59.503 [251/264] Linking target lib/librte_cryptodev.so.24.0 00:02:59.761 [252/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:59.761 [253/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:59.761 [254/264] Linking target lib/librte_security.so.24.0 00:02:59.761 [255/264] Linking target lib/librte_hash.so.24.0 00:02:59.761 [256/264] Linking target lib/librte_cmdline.so.24.0 00:02:59.761 [257/264] Linking target lib/librte_ethdev.so.24.0 00:03:00.019 [258/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:00.019 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:00.019 [260/264] Linking target 
lib/librte_power.so.24.0 00:03:02.585 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.585 [262/264] Linking static target lib/librte_vhost.a 00:03:03.962 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.962 [264/264] Linking target lib/librte_vhost.so.24.0 00:03:03.962 INFO: autodetecting backend as ninja 00:03:03.962 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:04.898 CC lib/log/log_flags.o 00:03:04.898 CC lib/ut/ut.o 00:03:04.898 CC lib/log/log_deprecated.o 00:03:04.898 CC lib/log/log.o 00:03:04.898 CC lib/ut_mock/mock.o 00:03:04.898 LIB libspdk_ut_mock.a 00:03:04.898 LIB libspdk_log.a 00:03:04.898 LIB libspdk_ut.a 00:03:05.157 CC lib/util/base64.o 00:03:05.157 CC lib/util/bit_array.o 00:03:05.157 CC lib/util/cpuset.o 00:03:05.157 CC lib/util/crc16.o 00:03:05.157 CC lib/util/crc32.o 00:03:05.157 CC lib/ioat/ioat.o 00:03:05.157 CC lib/util/crc32c.o 00:03:05.157 CC lib/dma/dma.o 00:03:05.157 CXX lib/trace_parser/trace.o 00:03:05.157 CC lib/vfio_user/host/vfio_user_pci.o 00:03:05.157 CC lib/vfio_user/host/vfio_user.o 00:03:05.157 CC lib/util/crc32_ieee.o 00:03:05.157 CC lib/util/crc64.o 00:03:05.157 LIB libspdk_dma.a 00:03:05.157 CC lib/util/dif.o 00:03:05.157 CC lib/util/fd.o 00:03:05.416 CC lib/util/file.o 00:03:05.416 CC lib/util/hexlify.o 00:03:05.416 CC lib/util/iov.o 00:03:05.416 CC lib/util/math.o 00:03:05.416 LIB libspdk_ioat.a 00:03:05.416 CC lib/util/pipe.o 00:03:05.416 CC lib/util/strerror_tls.o 00:03:05.416 CC lib/util/string.o 00:03:05.416 CC lib/util/uuid.o 00:03:05.416 LIB libspdk_vfio_user.a 00:03:05.416 CC lib/util/fd_group.o 00:03:05.416 CC lib/util/xor.o 00:03:05.674 CC lib/util/zipf.o 00:03:05.932 LIB libspdk_util.a 00:03:06.191 CC lib/json/json_parse.o 00:03:06.191 CC lib/idxd/idxd_user.o 00:03:06.191 CC lib/idxd/idxd.o 00:03:06.191 CC lib/json/json_write.o 00:03:06.191 CC lib/json/json_util.o 00:03:06.191 CC lib/env_dpdk/env.o 00:03:06.191 CC lib/vmd/vmd.o 00:03:06.191 CC lib/rdma/common.o 00:03:06.191 CC lib/conf/conf.o 00:03:06.191 LIB libspdk_conf.a 00:03:06.191 CC lib/vmd/led.o 00:03:06.450 CC lib/env_dpdk/memory.o 00:03:06.450 CC lib/env_dpdk/pci.o 00:03:06.450 CC lib/env_dpdk/init.o 00:03:06.450 LIB libspdk_trace_parser.a 00:03:06.450 CC lib/rdma/rdma_verbs.o 00:03:06.450 CC lib/env_dpdk/threads.o 00:03:06.450 LIB libspdk_json.a 00:03:06.450 CC lib/env_dpdk/pci_ioat.o 00:03:06.450 CC lib/env_dpdk/pci_virtio.o 00:03:06.722 CC lib/env_dpdk/pci_vmd.o 00:03:06.722 CC lib/env_dpdk/pci_idxd.o 00:03:06.722 LIB libspdk_rdma.a 00:03:06.722 CC lib/jsonrpc/jsonrpc_server.o 00:03:06.722 CC lib/env_dpdk/pci_event.o 00:03:06.722 CC lib/env_dpdk/sigbus_handler.o 00:03:06.722 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:06.722 CC lib/env_dpdk/pci_dpdk.o 00:03:06.722 CC lib/jsonrpc/jsonrpc_client.o 00:03:06.722 LIB libspdk_idxd.a 00:03:06.722 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:06.722 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:06.996 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:06.996 LIB libspdk_vmd.a 00:03:06.996 LIB libspdk_jsonrpc.a 00:03:07.255 CC lib/rpc/rpc.o 00:03:07.255 LIB libspdk_rpc.a 00:03:07.515 CC lib/trace/trace.o 00:03:07.515 CC lib/trace/trace_flags.o 00:03:07.515 CC lib/trace/trace_rpc.o 00:03:07.515 CC lib/notify/notify.o 00:03:07.515 CC lib/notify/notify_rpc.o 00:03:07.515 CC lib/sock/sock.o 00:03:07.515 CC lib/sock/sock_rpc.o 00:03:07.515 LIB libspdk_notify.a 00:03:07.515 LIB libspdk_env_dpdk.a 
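Note: at this point the log has moved on from the DPDK ninja build ([264/264] above) to SPDK's own Makefile-driven build, which prints the terse CC/LIB lines that follow. A hedged sketch of the equivalent manual steps, assuming the repo layout shown in the log; the exact configure flag set is not part of this excerpt, and --enable-asan is only an inference from the "b_sanitize : address" setting in the DPDK options above:

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-asan --with-dpdk=dpdk/build
    make -j10    # matches the -j 10 ninja invocation reported above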
00:03:07.774 LIB libspdk_trace.a 00:03:07.774 CC lib/thread/thread.o 00:03:07.774 CC lib/thread/iobuf.o 00:03:07.774 LIB libspdk_sock.a 00:03:08.033 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:08.033 CC lib/nvme/nvme_ctrlr.o 00:03:08.033 CC lib/nvme/nvme_fabric.o 00:03:08.033 CC lib/nvme/nvme_ns_cmd.o 00:03:08.033 CC lib/nvme/nvme_pcie_common.o 00:03:08.033 CC lib/nvme/nvme_ns.o 00:03:08.033 CC lib/nvme/nvme_pcie.o 00:03:08.033 CC lib/nvme/nvme_qpair.o 00:03:08.033 CC lib/nvme/nvme.o 00:03:08.602 CC lib/nvme/nvme_quirks.o 00:03:08.602 CC lib/nvme/nvme_transport.o 00:03:08.602 CC lib/nvme/nvme_discovery.o 00:03:08.602 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:08.861 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:08.861 CC lib/nvme/nvme_tcp.o 00:03:08.861 CC lib/nvme/nvme_opal.o 00:03:08.861 CC lib/nvme/nvme_io_msg.o 00:03:09.119 CC lib/nvme/nvme_poll_group.o 00:03:09.119 CC lib/nvme/nvme_zns.o 00:03:09.119 CC lib/nvme/nvme_cuse.o 00:03:09.119 CC lib/nvme/nvme_vfio_user.o 00:03:09.119 CC lib/nvme/nvme_rdma.o 00:03:09.687 LIB libspdk_thread.a 00:03:09.687 CC lib/blob/blobstore.o 00:03:09.687 CC lib/virtio/virtio_vhost_user.o 00:03:09.687 CC lib/virtio/virtio.o 00:03:09.687 CC lib/blob/request.o 00:03:09.687 CC lib/accel/accel.o 00:03:09.687 CC lib/init/json_config.o 00:03:09.946 CC lib/init/subsystem.o 00:03:09.946 CC lib/blob/zeroes.o 00:03:10.204 CC lib/init/subsystem_rpc.o 00:03:10.204 CC lib/virtio/virtio_vfio_user.o 00:03:10.204 CC lib/virtio/virtio_pci.o 00:03:10.204 CC lib/blob/blob_bs_dev.o 00:03:10.204 CC lib/init/rpc.o 00:03:10.204 CC lib/accel/accel_rpc.o 00:03:10.463 CC lib/accel/accel_sw.o 00:03:10.463 LIB libspdk_init.a 00:03:10.463 LIB libspdk_virtio.a 00:03:10.463 CC lib/event/app.o 00:03:10.463 CC lib/event/reactor.o 00:03:10.463 CC lib/event/log_rpc.o 00:03:10.463 CC lib/event/app_rpc.o 00:03:10.463 CC lib/event/scheduler_static.o 00:03:10.463 LIB libspdk_nvme.a 00:03:11.029 LIB libspdk_accel.a 00:03:11.029 LIB libspdk_event.a 00:03:11.029 CC lib/bdev/bdev_rpc.o 00:03:11.029 CC lib/bdev/bdev.o 00:03:11.029 CC lib/bdev/bdev_zone.o 00:03:11.029 CC lib/bdev/part.o 00:03:11.029 CC lib/bdev/scsi_nvme.o 00:03:13.565 LIB libspdk_blob.a 00:03:13.565 CC lib/lvol/lvol.o 00:03:13.565 CC lib/blobfs/blobfs.o 00:03:13.565 CC lib/blobfs/tree.o 00:03:14.504 LIB libspdk_bdev.a 00:03:14.504 CC lib/nbd/nbd_rpc.o 00:03:14.504 CC lib/nbd/nbd.o 00:03:14.504 CC lib/scsi/dev.o 00:03:14.504 CC lib/scsi/lun.o 00:03:14.504 CC lib/scsi/port.o 00:03:14.504 CC lib/scsi/scsi.o 00:03:14.504 CC lib/ftl/ftl_core.o 00:03:14.504 CC lib/nvmf/ctrlr.o 00:03:14.763 LIB libspdk_blobfs.a 00:03:14.763 CC lib/nvmf/ctrlr_discovery.o 00:03:14.763 LIB libspdk_lvol.a 00:03:14.763 CC lib/ftl/ftl_init.o 00:03:14.763 CC lib/nvmf/ctrlr_bdev.o 00:03:14.763 CC lib/scsi/scsi_bdev.o 00:03:14.763 CC lib/scsi/scsi_pr.o 00:03:15.023 CC lib/ftl/ftl_layout.o 00:03:15.023 CC lib/scsi/scsi_rpc.o 00:03:15.023 CC lib/scsi/task.o 00:03:15.023 CC lib/nvmf/subsystem.o 00:03:15.023 CC lib/nvmf/nvmf.o 00:03:15.023 LIB libspdk_nbd.a 00:03:15.023 CC lib/nvmf/nvmf_rpc.o 00:03:15.282 CC lib/ftl/ftl_debug.o 00:03:15.282 CC lib/ftl/ftl_io.o 00:03:15.282 CC lib/nvmf/transport.o 00:03:15.282 CC lib/ftl/ftl_sb.o 00:03:15.282 LIB libspdk_scsi.a 00:03:15.541 CC lib/ftl/ftl_l2p.o 00:03:15.541 CC lib/ftl/ftl_l2p_flat.o 00:03:15.541 CC lib/iscsi/conn.o 00:03:15.541 CC lib/vhost/vhost.o 00:03:15.541 CC lib/nvmf/tcp.o 00:03:15.541 CC lib/nvmf/rdma.o 00:03:15.800 CC lib/ftl/ftl_nv_cache.o 00:03:16.059 CC lib/vhost/vhost_rpc.o 00:03:16.059 CC lib/ftl/ftl_band.o 00:03:16.059 CC 
lib/vhost/vhost_scsi.o 00:03:16.059 CC lib/vhost/vhost_blk.o 00:03:16.319 CC lib/iscsi/init_grp.o 00:03:16.319 CC lib/iscsi/iscsi.o 00:03:16.578 CC lib/iscsi/md5.o 00:03:16.578 CC lib/ftl/ftl_band_ops.o 00:03:16.578 CC lib/vhost/rte_vhost_user.o 00:03:16.578 CC lib/iscsi/param.o 00:03:16.837 CC lib/iscsi/portal_grp.o 00:03:16.837 CC lib/ftl/ftl_writer.o 00:03:16.837 CC lib/ftl/ftl_rq.o 00:03:16.837 CC lib/ftl/ftl_reloc.o 00:03:16.837 CC lib/ftl/ftl_l2p_cache.o 00:03:17.096 CC lib/iscsi/tgt_node.o 00:03:17.096 CC lib/ftl/ftl_p2l.o 00:03:17.096 CC lib/ftl/ftl_trace.o 00:03:17.096 CC lib/iscsi/iscsi_subsystem.o 00:03:17.355 CC lib/iscsi/iscsi_rpc.o 00:03:17.355 CC lib/ftl/mngt/ftl_mngt.o 00:03:17.355 CC lib/iscsi/task.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:17.615 LIB libspdk_vhost.a 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:17.615 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:17.874 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:17.874 CC lib/ftl/utils/ftl_conf.o 00:03:17.874 CC lib/ftl/utils/ftl_md.o 00:03:17.874 CC lib/ftl/utils/ftl_mempool.o 00:03:17.874 CC lib/ftl/utils/ftl_bitmap.o 00:03:17.874 LIB libspdk_iscsi.a 00:03:17.874 CC lib/ftl/utils/ftl_property.o 00:03:17.874 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:17.874 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:18.133 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:18.133 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:18.133 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:18.133 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:18.133 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:18.133 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:18.133 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:18.133 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:18.133 CC lib/ftl/base/ftl_base_dev.o 00:03:18.133 LIB libspdk_nvmf.a 00:03:18.133 CC lib/ftl/base/ftl_base_bdev.o 00:03:18.758 LIB libspdk_ftl.a 00:03:19.017 CC module/env_dpdk/env_dpdk_rpc.o 00:03:19.017 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:19.017 CC module/accel/ioat/accel_ioat.o 00:03:19.017 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:19.017 CC module/blob/bdev/blob_bdev.o 00:03:19.017 CC module/scheduler/gscheduler/gscheduler.o 00:03:19.017 CC module/sock/posix/posix.o 00:03:19.017 CC module/accel/dsa/accel_dsa.o 00:03:19.017 CC module/accel/error/accel_error.o 00:03:19.017 CC module/accel/iaa/accel_iaa.o 00:03:19.017 LIB libspdk_env_dpdk_rpc.a 00:03:19.017 CC module/accel/iaa/accel_iaa_rpc.o 00:03:19.017 LIB libspdk_scheduler_gscheduler.a 00:03:19.017 LIB libspdk_scheduler_dpdk_governor.a 00:03:19.017 CC module/accel/dsa/accel_dsa_rpc.o 00:03:19.017 CC module/accel/ioat/accel_ioat_rpc.o 00:03:19.277 CC module/accel/error/accel_error_rpc.o 00:03:19.277 LIB libspdk_scheduler_dynamic.a 00:03:19.277 LIB libspdk_accel_iaa.a 00:03:19.277 LIB libspdk_blob_bdev.a 00:03:19.277 LIB libspdk_accel_dsa.a 00:03:19.277 LIB libspdk_accel_ioat.a 00:03:19.277 LIB libspdk_accel_error.a 00:03:19.277 CC module/blobfs/bdev/blobfs_bdev.o 00:03:19.277 CC module/bdev/delay/vbdev_delay.o 00:03:19.277 CC module/bdev/lvol/vbdev_lvol.o 00:03:19.277 CC module/bdev/error/vbdev_error.o 00:03:19.277 CC module/bdev/gpt/gpt.o 00:03:19.277 CC module/bdev/malloc/bdev_malloc.o 
00:03:19.277 CC module/bdev/null/bdev_null.o 00:03:19.277 CC module/bdev/nvme/bdev_nvme.o 00:03:19.536 CC module/bdev/passthru/vbdev_passthru.o 00:03:19.536 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:19.536 CC module/bdev/gpt/vbdev_gpt.o 00:03:19.795 CC module/bdev/null/bdev_null_rpc.o 00:03:19.795 CC module/bdev/error/vbdev_error_rpc.o 00:03:19.795 LIB libspdk_blobfs_bdev.a 00:03:19.795 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:19.795 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:19.795 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:19.795 LIB libspdk_sock_posix.a 00:03:19.795 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:19.795 LIB libspdk_bdev_error.a 00:03:19.795 CC module/bdev/nvme/nvme_rpc.o 00:03:19.795 LIB libspdk_bdev_null.a 00:03:19.795 CC module/bdev/nvme/bdev_mdns_client.o 00:03:19.795 LIB libspdk_bdev_gpt.a 00:03:19.795 CC module/bdev/nvme/vbdev_opal.o 00:03:19.795 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:20.054 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:20.054 LIB libspdk_bdev_passthru.a 00:03:20.054 LIB libspdk_bdev_delay.a 00:03:20.054 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:20.054 LIB libspdk_bdev_malloc.a 00:03:20.054 CC module/bdev/raid/bdev_raid.o 00:03:20.054 CC module/bdev/split/vbdev_split.o 00:03:20.054 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:20.054 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:20.054 CC module/bdev/split/vbdev_split_rpc.o 00:03:20.313 CC module/bdev/aio/bdev_aio.o 00:03:20.313 CC module/bdev/ftl/bdev_ftl.o 00:03:20.313 CC module/bdev/aio/bdev_aio_rpc.o 00:03:20.313 LIB libspdk_bdev_lvol.a 00:03:20.313 LIB libspdk_bdev_split.a 00:03:20.313 CC module/bdev/iscsi/bdev_iscsi.o 00:03:20.313 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:20.573 CC module/bdev/raid/bdev_raid_rpc.o 00:03:20.573 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:20.573 LIB libspdk_bdev_zone_block.a 00:03:20.573 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:20.573 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:20.573 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:20.573 CC module/bdev/raid/bdev_raid_sb.o 00:03:20.573 LIB libspdk_bdev_aio.a 00:03:20.573 CC module/bdev/raid/raid0.o 00:03:20.573 CC module/bdev/raid/raid1.o 00:03:20.832 CC module/bdev/raid/concat.o 00:03:20.832 CC module/bdev/raid/raid5f.o 00:03:20.832 LIB libspdk_bdev_ftl.a 00:03:20.832 LIB libspdk_bdev_iscsi.a 00:03:21.090 LIB libspdk_bdev_virtio.a 00:03:21.349 LIB libspdk_bdev_raid.a 00:03:22.286 LIB libspdk_bdev_nvme.a 00:03:22.546 CC module/event/subsystems/vmd/vmd.o 00:03:22.546 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:22.546 CC module/event/subsystems/sock/sock.o 00:03:22.546 CC module/event/subsystems/iobuf/iobuf.o 00:03:22.546 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:22.546 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:22.546 CC module/event/subsystems/scheduler/scheduler.o 00:03:22.805 LIB libspdk_event_sock.a 00:03:22.805 LIB libspdk_event_vhost_blk.a 00:03:22.805 LIB libspdk_event_vmd.a 00:03:22.805 LIB libspdk_event_scheduler.a 00:03:22.805 LIB libspdk_event_iobuf.a 00:03:22.805 CC module/event/subsystems/accel/accel.o 00:03:23.063 LIB libspdk_event_accel.a 00:03:23.063 CC module/event/subsystems/bdev/bdev.o 00:03:23.323 LIB libspdk_event_bdev.a 00:03:23.323 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:23.323 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:23.323 CC module/event/subsystems/nbd/nbd.o 00:03:23.323 CC module/event/subsystems/scsi/scsi.o 00:03:23.582 LIB libspdk_event_nbd.a 00:03:23.582 LIB libspdk_event_scsi.a 00:03:23.840 LIB 
libspdk_event_nvmf.a 00:03:23.840 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:23.840 CC module/event/subsystems/iscsi/iscsi.o 00:03:23.840 LIB libspdk_event_vhost_scsi.a 00:03:23.840 LIB libspdk_event_iscsi.a 00:03:24.099 CXX app/trace/trace.o 00:03:24.099 CC app/trace_record/trace_record.o 00:03:24.099 CC examples/ioat/perf/perf.o 00:03:24.099 CC examples/nvme/hello_world/hello_world.o 00:03:24.099 CC examples/sock/hello_world/hello_sock.o 00:03:24.099 CC examples/accel/perf/accel_perf.o 00:03:24.099 CC app/nvmf_tgt/nvmf_main.o 00:03:24.099 CC test/accel/dif/dif.o 00:03:24.099 CC examples/blob/hello_world/hello_blob.o 00:03:24.099 CC examples/bdev/hello_world/hello_bdev.o 00:03:24.412 LINK nvmf_tgt 00:03:24.412 LINK ioat_perf 00:03:24.412 LINK hello_blob 00:03:24.412 LINK hello_sock 00:03:24.412 LINK spdk_trace_record 00:03:24.412 LINK hello_bdev 00:03:24.412 LINK hello_world 00:03:24.671 LINK dif 00:03:24.671 LINK spdk_trace 00:03:24.671 LINK accel_perf 00:03:25.238 CC examples/ioat/verify/verify.o 00:03:25.238 CC examples/nvme/reconnect/reconnect.o 00:03:25.238 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:25.497 LINK verify 00:03:25.497 CC examples/nvme/arbitration/arbitration.o 00:03:25.497 LINK reconnect 00:03:25.755 CC examples/nvme/hotplug/hotplug.o 00:03:25.755 LINK nvme_manage 00:03:25.755 LINK arbitration 00:03:25.755 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:26.013 LINK hotplug 00:03:26.013 LINK cmb_copy 00:03:26.013 CC test/app/bdev_svc/bdev_svc.o 00:03:26.272 LINK bdev_svc 00:03:26.530 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:26.788 CC test/app/histogram_perf/histogram_perf.o 00:03:26.788 CC test/app/jsoncat/jsoncat.o 00:03:27.047 LINK histogram_perf 00:03:27.047 LINK jsoncat 00:03:27.047 LINK nvme_fuzz 00:03:27.305 CC examples/nvme/abort/abort.o 00:03:27.305 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:27.563 CC examples/blob/cli/blobcli.o 00:03:27.563 LINK abort 00:03:27.563 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:27.563 CC examples/bdev/bdevperf/bdevperf.o 00:03:27.821 CC test/app/stub/stub.o 00:03:27.821 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:27.821 LINK stub 00:03:27.821 CC app/iscsi_tgt/iscsi_tgt.o 00:03:28.080 LINK iscsi_tgt 00:03:28.080 CC test/bdev/bdevio/bdevio.o 00:03:28.080 LINK blobcli 00:03:28.337 LINK vhost_fuzz 00:03:28.337 CC test/blobfs/mkfs/mkfs.o 00:03:28.596 LINK mkfs 00:03:28.596 LINK bdevperf 00:03:28.596 LINK bdevio 00:03:28.596 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:28.854 LINK pmr_persistence 00:03:29.421 TEST_HEADER include/spdk/rpc.h 00:03:29.421 TEST_HEADER include/spdk/accel_module.h 00:03:29.421 TEST_HEADER include/spdk/bit_pool.h 00:03:29.421 TEST_HEADER include/spdk/nvmf.h 00:03:29.421 TEST_HEADER include/spdk/blobfs.h 00:03:29.421 TEST_HEADER include/spdk/notify.h 00:03:29.421 TEST_HEADER include/spdk/pipe.h 00:03:29.421 LINK iscsi_fuzz 00:03:29.421 TEST_HEADER include/spdk/accel.h 00:03:29.421 TEST_HEADER include/spdk/mmio.h 00:03:29.421 TEST_HEADER include/spdk/version.h 00:03:29.421 TEST_HEADER include/spdk/trace_parser.h 00:03:29.421 TEST_HEADER include/spdk/opal_spec.h 00:03:29.421 TEST_HEADER include/spdk/uuid.h 00:03:29.421 TEST_HEADER include/spdk/fd.h 00:03:29.421 TEST_HEADER include/spdk/likely.h 00:03:29.421 TEST_HEADER include/spdk/memory.h 00:03:29.421 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:29.421 TEST_HEADER include/spdk/dma.h 00:03:29.421 TEST_HEADER include/spdk/bit_array.h 00:03:29.421 TEST_HEADER include/spdk/nbd.h 00:03:29.421 TEST_HEADER 
include/spdk/bdev.h 00:03:29.421 TEST_HEADER include/spdk/nvme_zns.h 00:03:29.421 TEST_HEADER include/spdk/bdev_module.h 00:03:29.421 TEST_HEADER include/spdk/env_dpdk.h 00:03:29.421 TEST_HEADER include/spdk/nvmf_spec.h 00:03:29.421 TEST_HEADER include/spdk/fd_group.h 00:03:29.421 TEST_HEADER include/spdk/json.h 00:03:29.421 TEST_HEADER include/spdk/zipf.h 00:03:29.421 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:29.421 TEST_HEADER include/spdk/base64.h 00:03:29.421 TEST_HEADER include/spdk/gpt_spec.h 00:03:29.421 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:29.421 TEST_HEADER include/spdk/config.h 00:03:29.421 TEST_HEADER include/spdk/crc32.h 00:03:29.421 TEST_HEADER include/spdk/barrier.h 00:03:29.421 TEST_HEADER include/spdk/scsi_spec.h 00:03:29.421 TEST_HEADER include/spdk/hexlify.h 00:03:29.421 TEST_HEADER include/spdk/blob.h 00:03:29.421 TEST_HEADER include/spdk/cpuset.h 00:03:29.421 TEST_HEADER include/spdk/thread.h 00:03:29.421 TEST_HEADER include/spdk/opal.h 00:03:29.421 TEST_HEADER include/spdk/blob_bdev.h 00:03:29.421 TEST_HEADER include/spdk/xor.h 00:03:29.421 TEST_HEADER include/spdk/assert.h 00:03:29.421 TEST_HEADER include/spdk/nvme_spec.h 00:03:29.421 TEST_HEADER include/spdk/endian.h 00:03:29.421 TEST_HEADER include/spdk/tree.h 00:03:29.421 TEST_HEADER include/spdk/util.h 00:03:29.421 TEST_HEADER include/spdk/log.h 00:03:29.421 TEST_HEADER include/spdk/sock.h 00:03:29.421 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:29.421 TEST_HEADER include/spdk/ftl.h 00:03:29.421 TEST_HEADER include/spdk/vhost.h 00:03:29.421 TEST_HEADER include/spdk/crc64.h 00:03:29.421 TEST_HEADER include/spdk/nvme_intel.h 00:03:29.421 TEST_HEADER include/spdk/idxd_spec.h 00:03:29.421 TEST_HEADER include/spdk/crc16.h 00:03:29.421 TEST_HEADER include/spdk/bdev_zone.h 00:03:29.421 TEST_HEADER include/spdk/stdinc.h 00:03:29.421 TEST_HEADER include/spdk/scsi.h 00:03:29.421 TEST_HEADER include/spdk/trace.h 00:03:29.421 TEST_HEADER include/spdk/file.h 00:03:29.421 TEST_HEADER include/spdk/reduce.h 00:03:29.421 TEST_HEADER include/spdk/event.h 00:03:29.421 TEST_HEADER include/spdk/init.h 00:03:29.421 TEST_HEADER include/spdk/nvmf_transport.h 00:03:29.421 TEST_HEADER include/spdk/idxd.h 00:03:29.421 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:29.421 TEST_HEADER include/spdk/nvme.h 00:03:29.421 TEST_HEADER include/spdk/iscsi_spec.h 00:03:29.421 TEST_HEADER include/spdk/queue.h 00:03:29.421 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:29.421 TEST_HEADER include/spdk/lvol.h 00:03:29.421 TEST_HEADER include/spdk/histogram_data.h 00:03:29.421 TEST_HEADER include/spdk/env.h 00:03:29.421 TEST_HEADER include/spdk/ioat_spec.h 00:03:29.421 TEST_HEADER include/spdk/conf.h 00:03:29.421 TEST_HEADER include/spdk/ublk.h 00:03:29.421 TEST_HEADER include/spdk/dif.h 00:03:29.421 TEST_HEADER include/spdk/pci_ids.h 00:03:29.421 TEST_HEADER include/spdk/scheduler.h 00:03:29.421 TEST_HEADER include/spdk/string.h 00:03:29.421 TEST_HEADER include/spdk/jsonrpc.h 00:03:29.421 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:29.421 TEST_HEADER include/spdk/vmd.h 00:03:29.421 TEST_HEADER include/spdk/ioat.h 00:03:29.421 CXX test/cpp_headers/rpc.o 00:03:29.421 CC test/dma/test_dma/test_dma.o 00:03:29.421 CC test/env/mem_callbacks/mem_callbacks.o 00:03:29.421 CXX test/cpp_headers/accel_module.o 00:03:29.680 CXX test/cpp_headers/bit_pool.o 00:03:29.946 LINK test_dma 00:03:29.946 CXX test/cpp_headers/nvmf.o 00:03:29.946 LINK mem_callbacks 00:03:29.946 CXX test/cpp_headers/blobfs.o 00:03:29.946 CXX test/cpp_headers/notify.o 00:03:30.205 CXX 
test/cpp_headers/pipe.o 00:03:30.205 CC test/env/vtophys/vtophys.o 00:03:30.205 CC test/event/event_perf/event_perf.o 00:03:30.463 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:30.463 CXX test/cpp_headers/accel.o 00:03:30.463 LINK vtophys 00:03:30.463 LINK event_perf 00:03:30.463 LINK env_dpdk_post_init 00:03:30.463 CXX test/cpp_headers/mmio.o 00:03:30.722 CXX test/cpp_headers/version.o 00:03:30.722 CXX test/cpp_headers/trace_parser.o 00:03:30.981 CXX test/cpp_headers/opal_spec.o 00:03:31.259 CXX test/cpp_headers/uuid.o 00:03:31.259 CC test/lvol/esnap/esnap.o 00:03:31.259 CC test/event/reactor/reactor.o 00:03:31.259 CC test/nvme/aer/aer.o 00:03:31.259 CXX test/cpp_headers/fd.o 00:03:31.537 LINK reactor 00:03:31.537 CXX test/cpp_headers/likely.o 00:03:31.795 CXX test/cpp_headers/memory.o 00:03:31.795 LINK aer 00:03:31.795 CC test/env/memory/memory_ut.o 00:03:31.795 CXX test/cpp_headers/vfio_user_pci.o 00:03:31.795 CC test/rpc_client/rpc_client_test.o 00:03:31.795 CC examples/vmd/lsvmd/lsvmd.o 00:03:31.795 CC test/nvme/reset/reset.o 00:03:32.054 CXX test/cpp_headers/dma.o 00:03:32.054 LINK lsvmd 00:03:32.054 LINK rpc_client_test 00:03:32.313 CXX test/cpp_headers/bit_array.o 00:03:32.313 LINK reset 00:03:32.313 CC test/event/reactor_perf/reactor_perf.o 00:03:32.313 CXX test/cpp_headers/nbd.o 00:03:32.313 CXX test/cpp_headers/bdev.o 00:03:32.313 CC app/spdk_tgt/spdk_tgt.o 00:03:32.313 LINK reactor_perf 00:03:32.573 CXX test/cpp_headers/nvme_zns.o 00:03:32.573 LINK spdk_tgt 00:03:32.573 LINK memory_ut 00:03:32.573 CC app/spdk_lspci/spdk_lspci.o 00:03:32.832 CC test/nvme/sgl/sgl.o 00:03:32.832 CXX test/cpp_headers/bdev_module.o 00:03:32.832 LINK spdk_lspci 00:03:33.091 CC test/env/pci/pci_ut.o 00:03:33.091 CXX test/cpp_headers/env_dpdk.o 00:03:33.091 LINK sgl 00:03:33.349 CC examples/vmd/led/led.o 00:03:33.349 CXX test/cpp_headers/nvmf_spec.o 00:03:33.349 CC test/event/app_repeat/app_repeat.o 00:03:33.608 CC test/event/scheduler/scheduler.o 00:03:33.608 CC examples/nvmf/nvmf/nvmf.o 00:03:33.608 LINK pci_ut 00:03:33.608 CXX test/cpp_headers/fd_group.o 00:03:33.608 LINK led 00:03:33.608 LINK app_repeat 00:03:33.608 CXX test/cpp_headers/json.o 00:03:33.867 LINK scheduler 00:03:33.867 CXX test/cpp_headers/zipf.o 00:03:33.867 LINK nvmf 00:03:33.867 CC examples/util/zipf/zipf.o 00:03:34.126 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:34.126 CC examples/thread/thread/thread_ex.o 00:03:34.126 LINK zipf 00:03:34.126 CC test/nvme/e2edp/nvme_dp.o 00:03:34.126 CXX test/cpp_headers/base64.o 00:03:34.386 LINK thread 00:03:34.386 CXX test/cpp_headers/gpt_spec.o 00:03:34.386 LINK nvme_dp 00:03:34.645 CXX test/cpp_headers/blobfs_bdev.o 00:03:34.645 CC examples/idxd/perf/perf.o 00:03:34.645 CC test/thread/poller_perf/poller_perf.o 00:03:34.645 CXX test/cpp_headers/config.o 00:03:34.645 CXX test/cpp_headers/crc32.o 00:03:34.905 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:34.905 LINK poller_perf 00:03:34.905 CXX test/cpp_headers/barrier.o 00:03:34.905 LINK interrupt_tgt 00:03:35.163 CXX test/cpp_headers/scsi_spec.o 00:03:35.163 LINK idxd_perf 00:03:35.163 CXX test/cpp_headers/hexlify.o 00:03:35.422 CXX test/cpp_headers/blob.o 00:03:35.422 CXX test/cpp_headers/cpuset.o 00:03:35.682 CXX test/cpp_headers/thread.o 00:03:35.682 CC test/nvme/overhead/overhead.o 00:03:35.682 CC test/thread/lock/spdk_lock.o 00:03:35.682 CXX test/cpp_headers/opal.o 00:03:35.682 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:35.941 LINK overhead 00:03:35.941 CXX test/cpp_headers/blob_bdev.o 00:03:35.941 LINK 
histogram_ut 00:03:36.200 CXX test/cpp_headers/xor.o 00:03:36.200 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:36.459 CXX test/cpp_headers/assert.o 00:03:36.459 CXX test/cpp_headers/nvme_spec.o 00:03:36.719 CXX test/cpp_headers/endian.o 00:03:36.719 CC app/spdk_nvme_perf/perf.o 00:03:36.719 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:36.977 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:36.977 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:36.977 CXX test/cpp_headers/tree.o 00:03:36.977 CXX test/cpp_headers/util.o 00:03:36.977 CC test/nvme/err_injection/err_injection.o 00:03:37.237 CXX test/cpp_headers/log.o 00:03:37.237 LINK esnap 00:03:37.237 LINK err_injection 00:03:37.237 CXX test/cpp_headers/sock.o 00:03:37.496 LINK blob_bdev_ut 00:03:37.496 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:37.496 LINK spdk_lock 00:03:37.756 CXX test/cpp_headers/ftl.o 00:03:37.756 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:37.756 CC test/nvme/startup/startup.o 00:03:37.756 LINK spdk_nvme_perf 00:03:38.015 CXX test/cpp_headers/vhost.o 00:03:38.015 LINK startup 00:03:38.015 CXX test/cpp_headers/crc64.o 00:03:38.275 CXX test/cpp_headers/nvme_intel.o 00:03:38.275 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:38.275 CC app/spdk_nvme_identify/identify.o 00:03:38.534 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:38.534 CXX test/cpp_headers/idxd_spec.o 00:03:38.534 LINK scsi_nvme_ut 00:03:38.793 CXX test/cpp_headers/crc16.o 00:03:39.052 LINK accel_ut 00:03:39.052 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:39.052 CC test/nvme/reserve/reserve.o 00:03:39.052 CXX test/cpp_headers/bdev_zone.o 00:03:39.052 LINK gpt_ut 00:03:39.311 CXX test/cpp_headers/stdinc.o 00:03:39.311 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:39.311 CXX test/cpp_headers/scsi.o 00:03:39.311 LINK reserve 00:03:39.311 CXX test/cpp_headers/trace.o 00:03:39.311 LINK spdk_nvme_identify 00:03:39.570 CC test/nvme/simple_copy/simple_copy.o 00:03:39.570 CC test/nvme/connect_stress/connect_stress.o 00:03:39.570 CXX test/cpp_headers/file.o 00:03:39.830 LINK connect_stress 00:03:39.830 CXX test/cpp_headers/reduce.o 00:03:39.830 LINK simple_copy 00:03:39.830 CXX test/cpp_headers/event.o 00:03:40.089 CXX test/cpp_headers/init.o 00:03:40.089 LINK vbdev_lvol_ut 00:03:40.089 CXX test/cpp_headers/nvmf_transport.o 00:03:40.348 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:40.348 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:40.348 CXX test/cpp_headers/idxd.o 00:03:40.607 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.607 CXX test/cpp_headers/vfio_user_spec.o 00:03:40.607 LINK tree_ut 00:03:40.607 CXX test/cpp_headers/nvme.o 00:03:40.607 LINK spdk_nvme_discover 00:03:40.866 CXX test/cpp_headers/iscsi_spec.o 00:03:40.866 LINK part_ut 00:03:40.866 CC app/spdk_top/spdk_top.o 00:03:40.866 CC test/nvme/boot_partition/boot_partition.o 00:03:40.866 CC test/nvme/compliance/nvme_compliance.o 00:03:40.866 CXX test/cpp_headers/queue.o 00:03:40.866 CXX test/cpp_headers/nvmf_cmd.o 00:03:41.125 LINK boot_partition 00:03:41.125 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.125 CXX test/cpp_headers/lvol.o 00:03:41.384 LINK nvme_compliance 00:03:41.384 CXX test/cpp_headers/histogram_data.o 00:03:41.384 LINK fused_ordering 00:03:41.643 CXX test/cpp_headers/env.o 00:03:41.643 CXX test/cpp_headers/ioat_spec.o 00:03:41.902 LINK blobfs_async_ut 00:03:41.902 LINK spdk_top 00:03:41.902 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:41.902 CXX test/cpp_headers/conf.o 00:03:42.161 CXX test/cpp_headers/ublk.o 00:03:42.161 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.161 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:42.161 CXX test/cpp_headers/dif.o 00:03:42.161 LINK dma_ut 00:03:42.161 CXX test/cpp_headers/pci_ids.o 00:03:42.419 LINK doorbell_aers 00:03:42.420 CXX test/cpp_headers/scheduler.o 00:03:42.420 CXX test/cpp_headers/string.o 00:03:42.420 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:42.420 CC app/vhost/vhost.o 00:03:42.420 CC app/spdk_dd/spdk_dd.o 00:03:42.678 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:42.678 CXX test/cpp_headers/jsonrpc.o 00:03:42.678 LINK bdev_ut 00:03:42.679 LINK vhost 00:03:42.679 CXX test/cpp_headers/nvme_ocssd.o 00:03:42.937 LINK spdk_dd 00:03:42.937 LINK bdev_zone_ut 00:03:42.937 CXX test/cpp_headers/vmd.o 00:03:42.937 CXX test/cpp_headers/ioat.o 00:03:43.196 CC test/unit/lib/event/app.c/app_ut.o 00:03:43.196 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:43.196 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:43.196 LINK bdev_ut 00:03:43.455 CC test/nvme/fdp/fdp.o 00:03:43.455 LINK blobfs_sync_ut 00:03:43.761 LINK ioat_ut 00:03:43.761 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:43.761 LINK fdp 00:03:43.761 LINK app_ut 00:03:43.761 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:43.761 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:44.020 LINK blobfs_bdev_ut 00:03:44.020 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:44.279 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:44.279 LINK bdev_raid_sb_ut 00:03:44.279 LINK conn_ut 00:03:44.537 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:44.537 LINK bdev_raid_ut 00:03:44.796 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:44.796 CC test/nvme/cuse/cuse.o 00:03:44.796 LINK concat_ut 00:03:44.796 CC test/unit/lib/log/log.c/log_ut.o 00:03:45.054 LINK reactor_ut 00:03:45.054 LINK jsonrpc_server_ut 00:03:45.055 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:45.055 LINK init_grp_ut 00:03:45.313 LINK log_ut 00:03:45.313 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:03:45.313 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:45.313 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:45.572 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:45.572 LINK raid1_ut 00:03:45.572 CC app/fio/nvme/fio_plugin.o 00:03:45.572 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:45.831 LINK cuse 00:03:45.831 LINK blob_ut 00:03:45.831 LINK notify_ut 00:03:45.831 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:45.831 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:46.090 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:46.090 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:46.348 LINK json_util_ut 00:03:46.348 LINK spdk_nvme 00:03:46.348 LINK raid5f_ut 00:03:46.348 LINK param_ut 00:03:46.348 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:46.607 LINK json_parse_ut 00:03:46.607 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:46.607 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:46.607 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:47.174 LINK lvol_ut 00:03:47.174 LINK nvme_ut 00:03:47.174 CC app/fio/bdev/fio_plugin.o 00:03:47.433 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:47.433 LINK nvme_ctrlr_cmd_ut 00:03:47.433 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:47.433 LINK json_write_ut 00:03:47.433 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:47.433 LINK vbdev_zone_block_ut 00:03:47.433 LINK nvme_ns_ut 00:03:47.692 CC 
test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:47.692 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:47.692 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:47.692 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:47.692 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:47.951 LINK spdk_bdev 00:03:47.951 LINK iscsi_ut 00:03:47.951 LINK portal_grp_ut 00:03:48.211 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:48.470 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:48.470 LINK nvme_poll_group_ut 00:03:48.727 LINK dev_ut 00:03:48.727 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:49.003 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:49.003 LINK nvme_ns_ocssd_cmd_ut 00:03:49.269 LINK tgt_node_ut 00:03:49.269 LINK nvme_pcie_ut 00:03:49.269 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:49.527 LINK nvme_ns_cmd_ut 00:03:49.527 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:49.527 LINK nvme_ctrlr_ut 00:03:49.527 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:49.527 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:49.785 LINK lun_ut 00:03:49.785 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:49.785 LINK nvme_quirks_ut 00:03:50.044 LINK scsi_ut 00:03:50.044 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:50.044 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:50.044 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:50.302 LINK iobuf_ut 00:03:50.302 LINK sock_ut 00:03:50.302 LINK base64_ut 00:03:50.560 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:50.560 LINK nvme_qpair_ut 00:03:50.560 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:50.560 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:50.560 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:50.819 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:50.819 LINK bit_array_ut 00:03:51.077 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:51.336 LINK scsi_bdev_ut 00:03:51.336 LINK cpuset_ut 00:03:51.595 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:51.595 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:51.595 LINK crc16_ut 00:03:51.595 LINK tcp_ut 00:03:51.595 LINK posix_ut 00:03:51.853 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:51.853 LINK thread_ut 00:03:51.853 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:51.853 LINK crc32_ieee_ut 00:03:52.112 LINK scsi_pr_ut 00:03:52.112 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:52.112 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:52.112 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:52.371 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:52.371 LINK crc32c_ut 00:03:52.371 LINK crc64_ut 00:03:52.371 LINK pci_event_ut 00:03:52.371 LINK bdev_nvme_ut 00:03:52.371 LINK ctrlr_discovery_ut 00:03:52.630 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:52.630 LINK subsystem_ut 00:03:52.630 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:52.630 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:52.630 LINK subsystem_ut 00:03:52.888 LINK nvme_transport_ut 00:03:52.888 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:52.888 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:03:52.888 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:52.888 LINK rpc_ut 00:03:53.147 LINK idxd_user_ut 00:03:53.147 LINK ctrlr_ut 00:03:53.147 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:03:53.147 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:03:53.147 LINK rpc_ut 00:03:53.147 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:03:53.405 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:53.405 
LINK nvme_tcp_ut 00:03:53.405 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:03:53.405 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:53.664 LINK ftl_l2p_ut 00:03:53.664 LINK common_ut 00:03:53.664 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:53.664 LINK dif_ut 00:03:53.664 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:53.922 LINK ftl_bitmap_ut 00:03:53.922 LINK ftl_io_ut 00:03:53.922 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:54.181 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:54.181 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:54.181 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:54.440 LINK idxd_ut 00:03:54.440 LINK nvme_io_msg_ut 00:03:54.440 LINK iov_ut 00:03:54.440 LINK ftl_band_ut 00:03:54.440 LINK ctrlr_bdev_ut 00:03:54.699 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:03:54.699 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:03:54.699 LINK nvme_fabric_ut 00:03:54.699 CC test/unit/lib/util/math.c/math_ut.o 00:03:54.699 LINK vhost_ut 00:03:54.699 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:54.961 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:03:54.961 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:03:54.961 LINK math_ut 00:03:54.961 LINK ftl_mempool_ut 00:03:54.961 LINK nvme_opal_ut 00:03:54.961 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:54.961 CC test/unit/lib/util/string.c/string_ut.o 00:03:55.227 LINK nvme_pcie_common_ut 00:03:55.227 LINK ftl_mngt_ut 00:03:55.227 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:03:55.227 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:55.486 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:55.486 LINK string_ut 00:03:55.486 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:55.486 LINK xor_ut 00:03:55.486 LINK pipe_ut 00:03:56.053 LINK nvmf_ut 00:03:56.312 LINK ftl_layout_upgrade_ut 00:03:56.312 LINK ftl_sb_ut 00:03:56.312 LINK nvme_rdma_ut 00:03:56.879 LINK nvme_cuse_ut 00:03:58.255 LINK transport_ut 00:03:58.823 LINK rdma_ut 00:03:59.082 00:03:59.082 real 2m1.705s 00:03:59.082 user 10m36.848s 00:03:59.082 sys 1m54.433s 00:03:59.082 07:04:32 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:59.082 ************************************ 00:03:59.082 END TEST unittest_build 00:03:59.082 ************************************ 00:03:59.082 07:04:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:59.082 07:04:32 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:59.082 07:04:32 -- nvmf/common.sh@7 -- # uname -s 00:03:59.082 07:04:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:59.082 07:04:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:59.082 07:04:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:59.082 07:04:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:59.082 07:04:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:59.082 07:04:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:59.082 07:04:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:59.082 07:04:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:59.082 07:04:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:59.082 07:04:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:59.082 07:04:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:71168b00-7f67-447c-92fa-6dfe04eb27cb 00:03:59.082 07:04:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=71168b00-7f67-447c-92fa-6dfe04eb27cb 00:03:59.082 07:04:32 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:59.082 07:04:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:59.082 07:04:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:59.082 07:04:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:59.082 07:04:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:59.082 07:04:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:59.082 07:04:32 -- nvmf/common.sh@46 -- # : 0 00:03:59.082 07:04:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:59.082 07:04:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:59.082 07:04:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:59.082 07:04:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:59.082 07:04:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:59.082 07:04:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:59.082 07:04:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:59.082 07:04:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:59.082 07:04:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:59.082 07:04:32 -- spdk/autotest.sh@32 -- # uname -s 00:03:59.082 07:04:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:59.082 07:04:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E' 00:03:59.082 07:04:32 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.082 07:04:32 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:59.082 07:04:32 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:59.082 07:04:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:59.649 07:04:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:59.649 07:04:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm 00:03:59.649 07:04:33 -- spdk/autotest.sh@48 -- # udevadm_pid=96425 00:03:59.650 07:04:33 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:59.650 07:04:33 -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property 00:03:59.650 07:04:33 -- spdk/autotest.sh@54 -- # echo 96459 00:03:59.650 07:04:33 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:59.650 07:04:33 -- spdk/autotest.sh@56 -- # echo 96515 00:03:59.650 07:04:33 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:59.650 07:04:33 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:59.650 07:04:33 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:59.650 07:04:33 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:59.650 07:04:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:59.650 07:04:33 -- common/autotest_common.sh@10 -- # set +x 00:03:59.650 07:04:33 -- spdk/autotest.sh@70 -- # create_test_list 00:03:59.650 07:04:33 -- common/autotest_common.sh@734 -- # xtrace_disable 00:03:59.650 07:04:33 -- common/autotest_common.sh@10 -- # set +x 00:03:59.908 07:04:33 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:59.908 07:04:33 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:59.908 07:04:33 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:59.908 
07:04:33 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:59.908 07:04:33 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:59.908 07:04:33 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:59.908 07:04:33 -- common/autotest_common.sh@1438 -- # uname 00:03:59.908 07:04:33 -- common/autotest_common.sh@1438 -- # '[' Linux = FreeBSD ']' 00:03:59.908 07:04:33 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:59.908 07:04:33 -- common/autotest_common.sh@1458 -- # uname 00:03:59.908 07:04:33 -- common/autotest_common.sh@1458 -- # [[ Linux = FreeBSD ]] 00:03:59.908 07:04:33 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:59.908 07:04:33 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:59.908 07:04:33 -- spdk/autotest.sh@83 -- # hash lcov 00:03:59.908 07:04:33 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:59.908 07:04:33 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:59.908 --rc lcov_branch_coverage=1 00:03:59.908 --rc lcov_function_coverage=1 00:03:59.908 --rc genhtml_branch_coverage=1 00:03:59.908 --rc genhtml_function_coverage=1 00:03:59.908 --rc genhtml_legend=1 00:03:59.908 --rc geninfo_all_blocks=1 00:03:59.908 ' 00:03:59.908 07:04:33 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:59.908 --rc lcov_branch_coverage=1 00:03:59.908 --rc lcov_function_coverage=1 00:03:59.908 --rc genhtml_branch_coverage=1 00:03:59.908 --rc genhtml_function_coverage=1 00:03:59.908 --rc genhtml_legend=1 00:03:59.908 --rc geninfo_all_blocks=1 00:03:59.908 ' 00:03:59.908 07:04:33 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:59.908 --rc lcov_branch_coverage=1 00:03:59.908 --rc lcov_function_coverage=1 00:03:59.908 --rc genhtml_branch_coverage=1 00:03:59.908 --rc genhtml_function_coverage=1 00:03:59.908 --rc genhtml_legend=1 00:03:59.908 --rc geninfo_all_blocks=1 00:03:59.908 --no-external' 00:03:59.908 07:04:33 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:59.908 --rc lcov_branch_coverage=1 00:03:59.908 --rc lcov_function_coverage=1 00:03:59.908 --rc genhtml_branch_coverage=1 00:03:59.908 --rc genhtml_function_coverage=1 00:03:59.908 --rc genhtml_legend=1 00:03:59.908 --rc geninfo_all_blocks=1 00:03:59.908 --no-external' 00:03:59.908 07:04:33 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:59.908 lcov: LCOV version 1.14 00:03:59.908 07:04:33 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:06.480 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:06.480 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:06.480 geninfo: WARNING: GCOV did 
not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:06.481 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not 
produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:06.481 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:06.481 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:06.481 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:06.482 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:06.482 geninfo: WARNING: GCOV did not produce any data for 
00:04:07.858 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:04:07.858 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:04:08.117 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:04:08.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:04:08.117 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:04:08.117 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:04:13.392 07:04:46 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup
00:04:13.392 07:04:46 -- common/autotest_common.sh@710 -- # xtrace_disable
00:04:13.392 07:04:46 -- common/autotest_common.sh@10 -- # set +x
00:04:13.392 07:04:46 -- spdk/autotest.sh@102 -- # rm -f
00:04:13.392 07:04:46 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:13.392 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev
00:04:13.392 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:04:13.392 07:04:46 -- spdk/autotest.sh@107 -- # get_zoned_devs
00:04:13.392 07:04:46 -- common/autotest_common.sh@1652 -- # zoned_devs=()
00:04:13.392 07:04:46 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs
00:04:13.392 07:04:46 -- common/autotest_common.sh@1653 -- # local nvme bdf
00:04:13.392 07:04:46 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme*
00:04:13.392 07:04:46 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1
00:04:13.392 07:04:46 -- common/autotest_common.sh@1645 -- # local device=nvme0n1
00:04:13.392 07:04:46 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:13.392 07:04:46 -- common/autotest_common.sh@1648 -- # [[ none != none ]]
00:04:13.392 07:04:46 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
00:04:13.392 07:04:46 -- spdk/autotest.sh@121 -- # grep -v p
00:04:13.392 07:04:46 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1
00:04:13.392 07:04:46 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:04:13.392 07:04:46 -- spdk/autotest.sh@123 -- # [[ -z '' ]]
00:04:13.392 07:04:46 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1
00:04:13.392 07:04:46 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:04:13.392 07:04:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:04:13.392 No valid GPT data, bailing
00:04:13.392 07:04:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:13.392 07:04:46 -- scripts/common.sh@393 -- # pt=
00:04:13.392 07:04:46 -- scripts/common.sh@394 -- # return 1
00:04:13.392 07:04:46 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:04:13.392 1+0 records in
00:04:13.392 1+0 records out
00:04:13.392 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0339842 s, 30.9 MB/s
00:04:13.392 07:04:46 -- spdk/autotest.sh@129 -- # sync
00:04:13.651 07:04:47 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes
00:04:13.651 07:04:47 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:04:13.651 07:04:47 -- common/autotest_common.sh@22 -- # reap_spdk_processes
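[Editor's note] The pre_cleanup trace above boils down to: skip any zoned NVMe namespace, and zero the first MiB of each unpartitioned namespace so stale metadata cannot leak into later tests. A condensed sketch of that flow, using the same commands the trace shows (run as root on a disposable test VM; error handling omitted):

  for sysdev in /sys/block/nvme*; do
    # is_block_zoned: skip only if queue/zoned reports something other
    # than "none" (here it reported "none", so the device is eligible)
    [[ -e $sysdev/queue/zoned && $(cat "$sysdev/queue/zoned") != none ]] && continue
    dev=/dev/${sysdev##*/}
    # block_in_use: spdk-gpt.py and 'blkid -s PTTYPE -o value' both found
    # no partition table ("No valid GPT data, bailing"), so it is unused
    dd if=/dev/zero of="$dev" bs=1M count=1   # scrub stale metadata
  done
  sync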
00:04:15.028 07:04:48 -- spdk/autotest.sh@135 -- # uname -s
00:04:15.028 07:04:48 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
00:04:15.028 07:04:48 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:15.028 07:04:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:15.028 07:04:48 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:15.028 07:04:48 -- common/autotest_common.sh@10 -- # set +x
00:04:15.028 ************************************
00:04:15.028 START TEST setup.sh
00:04:15.028 ************************************
00:04:15.028 07:04:48 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh
00:04:15.028 * Looking for test storage...
00:04:15.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:15.028 07:04:48 -- setup/test-setup.sh@10 -- # uname -s
00:04:15.028 07:04:48 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:04:15.028 07:04:48 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:15.028 07:04:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:15.028 07:04:48 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:15.028 07:04:48 -- common/autotest_common.sh@10 -- # set +x
00:04:15.028 ************************************
00:04:15.028 START TEST acl
00:04:15.028 ************************************
00:04:15.028 07:04:48 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh
00:04:15.028 * Looking for test storage...
00:04:15.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:15.028 07:04:48 -- setup/acl.sh@10 -- # get_zoned_devs
00:04:15.028 07:04:48 -- common/autotest_common.sh@1652 -- # zoned_devs=()
00:04:15.028 07:04:48 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs
00:04:15.028 07:04:48 -- common/autotest_common.sh@1653 -- # local nvme bdf
00:04:15.028 07:04:48 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme*
00:04:15.028 07:04:48 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1
00:04:15.028 07:04:48 -- common/autotest_common.sh@1645 -- # local device=nvme0n1
00:04:15.028 07:04:48 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:15.028 07:04:48 -- common/autotest_common.sh@1648 -- # [[ none != none ]]
00:04:15.028 07:04:48 -- setup/acl.sh@12 -- # devs=()
00:04:15.028 07:04:48 -- setup/acl.sh@12 -- # declare -a devs
00:04:15.028 07:04:48 -- setup/acl.sh@13 -- # drivers=()
00:04:15.028 07:04:48 -- setup/acl.sh@13 -- # declare -A drivers
00:04:15.028 07:04:48 -- setup/acl.sh@51 -- # setup reset
00:04:15.028 07:04:48 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:15.028 07:04:48 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:15.288 07:04:48 -- setup/acl.sh@52 -- # collect_setup_devs
00:04:15.288 07:04:48 -- setup/acl.sh@16 -- # local dev driver
00:04:15.288 07:04:48 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.288 07:04:48 -- setup/acl.sh@15 -- # setup output status
00:04:15.288 07:04:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.288 07:04:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:04:15.547 Hugepages
00:04:15.547 node hugesize free / total
00:04:15.547 07:04:49 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:04:15.547 07:04:49 -- setup/acl.sh@19 -- # continue
00:04:15.547 07:04:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.547 00
00:04:15.547 Type BDF Vendor Device NUMA Driver Device Block devices
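[Editor's note] collect_setup_devs consumes the `setup.sh status` output whose table header just appeared: it reads each row, discards anything whose second field is not a PCI BDF (the hugepage summary rows), and keeps only devices bound to the nvme driver, as the read/compare trace below shows. A sketch reconstructed from that trace (not acl.sh's verbatim source):

  declare -a devs; declare -A drivers
  while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue     # skip "Hugepages"/"2048kB" rows
    [[ $driver == nvme ]] || continue     # 0000:00:03.0 (virtio-pci) is skipped
    devs+=("$dev"); drivers["$dev"]=$driver
  done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)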
00:04:15.547 07:04:49 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:04:15.547 07:04:49 -- setup/acl.sh@19 -- # continue
00:04:15.547 07:04:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.547 07:04:49 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:04:15.547 07:04:49 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:04:15.547 07:04:49 -- setup/acl.sh@20 -- # continue
00:04:15.547 07:04:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.806 07:04:49 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]]
00:04:15.806 07:04:49 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:04:15.806 07:04:49 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]]
00:04:15.806 07:04:49 -- setup/acl.sh@22 -- # devs+=("$dev")
00:04:15.806 07:04:49 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:04:15.806 07:04:49 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:04:15.806 07:04:49 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:04:15.806 07:04:49 -- setup/acl.sh@54 -- # run_test denied denied
00:04:15.806 07:04:49 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:15.806 07:04:49 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:15.806 07:04:49 -- common/autotest_common.sh@10 -- # set +x
00:04:15.806 ************************************
00:04:15.806 START TEST denied
00:04:15.806 ************************************
00:04:15.806 07:04:49 -- common/autotest_common.sh@1102 -- # denied
00:04:15.806 07:04:49 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0'
00:04:15.806 07:04:49 -- setup/acl.sh@38 -- # setup output config
00:04:15.806 07:04:49 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0'
00:04:15.806 07:04:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.806 07:04:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:17.710 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0
00:04:17.710 07:04:51 -- setup/acl.sh@40 -- # verify 0000:00:06.0
00:04:17.710 07:04:51 -- setup/acl.sh@28 -- # local dev driver
00:04:17.710 07:04:51 -- setup/acl.sh@30 -- # for dev in "$@"
00:04:17.710 07:04:51 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]]
00:04:17.710 07:04:51 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver
00:04:17.710 07:04:51 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:17.710 07:04:51 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:17.710 07:04:51 -- setup/acl.sh@41 -- # setup reset
00:04:17.710 07:04:51 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:17.710 07:04:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:17.969
00:04:17.969 real 0m2.242s
00:04:17.969 user 0m0.516s
00:04:17.969 sys 0m1.779s
00:04:17.969 07:04:51 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:17.969 ************************************
00:04:17.969 END TEST denied
00:04:17.969 ************************************
00:04:17.969 07:04:51 -- common/autotest_common.sh@10 -- # set +x
00:04:17.969 07:04:51 -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:17.969 07:04:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:17.969 07:04:51 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:17.969 07:04:51 -- common/autotest_common.sh@10 -- # set +x
00:04:17.969 ************************************
00:04:17.969 START TEST allowed
00:04:17.969 ************************************
00:04:17.969 07:04:51 -- common/autotest_common.sh@1102 -- # allowed
00:04:17.969 07:04:51 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0
00:04:17.969 07:04:51 -- setup/acl.sh@45 -- # setup output config
00:04:17.969 07:04:51 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*'
00:04:17.969 07:04:51 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:17.969 07:04:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:04:19.871 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:19.871 07:04:53 -- setup/acl.sh@47 -- # verify
00:04:19.871 07:04:53 -- setup/acl.sh@28 -- # local dev driver
00:04:19.871 07:04:53 -- setup/acl.sh@48 -- # setup reset
00:04:19.871 07:04:53 -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:19.871 07:04:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:04:20.130
00:04:20.130 real 0m2.080s
00:04:20.130 user 0m0.439s
00:04:20.130 sys 0m1.559s
00:04:20.130 07:04:53 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:20.130 07:04:53 -- common/autotest_common.sh@10 -- # set +x
00:04:20.130 ************************************
00:04:20.130 END TEST allowed
00:04:20.130 ************************************
00:04:20.130 ************************************
00:04:20.130 END TEST acl
00:04:20.130 ************************************
00:04:20.130
00:04:20.130 real 0m5.236s
00:04:20.130 user 0m1.537s
00:04:20.130 sys 0m3.699s
00:04:20.130 07:04:53 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:20.130 07:04:53 -- common/autotest_common.sh@10 -- # set +x
00:04:20.130 07:04:53 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:20.130 07:04:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:20.130 07:04:53 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:20.130 07:04:53 -- common/autotest_common.sh@10 -- # set +x
00:04:20.130 ************************************
00:04:20.130 START TEST hugepages
00:04:20.130 ************************************
00:04:20.130 07:04:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
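[Editor's note] The denied/allowed pair above exercises setup.sh's PCI filter lists: with PCI_BLOCKED=' 0000:00:06.0' the config pass logged "Skipping denied controller" and left the device on nvme, while PCI_ALLOWED=0000:00:06.0 let it rebind (nvme -> uio_pci_generic). A sketch of that gating as the log implies it (illustrative only; pci_can_use here is a hypothetical helper, not necessarily setup.sh's internals):

  pci_can_use() {
    local bdf=$1
    # a blocked BDF is always refused, with the message the denied test greps for
    [[ $PCI_BLOCKED == *"$bdf"* ]] && { echo "Skipping denied controller at $bdf"; return 1; }
    # an empty allow-list permits everything; otherwise the BDF must be listed
    [[ -z $PCI_ALLOWED || $PCI_ALLOWED == *"$bdf"* ]]
  }
  pci_can_use 0000:00:06.0 && echo "0000:00:06.0: nvme -> uio_pci_generic"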
00:04:20.130 * Looking for test storage...
00:04:20.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:20.130 07:04:53 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:20.130 07:04:53 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:20.130 07:04:53 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:20.130 07:04:53 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:20.130 07:04:53 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:20.130 07:04:53 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:20.130 07:04:53 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:20.130 07:04:53 -- setup/common.sh@18 -- # local node=
00:04:20.130 07:04:53 -- setup/common.sh@19 -- # local var val
00:04:20.130 07:04:53 -- setup/common.sh@20 -- # local mem_f mem
00:04:20.130 07:04:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:20.130 07:04:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:20.130 07:04:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:20.130 07:04:53 -- setup/common.sh@28 -- # mapfile -t mem
00:04:20.130 07:04:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:20.130 07:04:53 -- setup/common.sh@31 -- # IFS=': '
00:04:20.130 07:04:53 -- setup/common.sh@31 -- # read -r var val _
00:04:20.130 07:04:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 3042892 kB' 'MemAvailable: 7412356 kB' 'Buffers: 38140 kB' 'Cached: 4448976 kB' 'SwapCached: 0 kB' 'Active: 1206828 kB' 'Inactive: 3408420 kB' 'Active(anon): 133084 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073744 kB' 'Inactive(file): 3406628 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 472 kB' 'Writeback: 0 kB' 'AnonPages: 147580 kB' 'Mapped: 75856 kB' 'Shmem: 2636 kB' 'KReclaimable: 211008 kB' 'Slab: 309516 kB' 'SReclaimable: 211008 kB' 'SUnreclaim: 98508 kB' 'KernelStack: 4804 kB' 'PageTables: 3948 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 4034476 kB' 'Committed_AS: 721724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14432 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:20.130 07:04:53 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:20.130 07:04:53 -- setup/common.sh@32 -- # continue
00:04:20.130 07:04:53 -- setup/common.sh@31 -- # IFS=': '
00:04:20.130 07:04:53 -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 xtrace repeats the IFS=': ' / read / compare / continue cycle for each remaining /proc/meminfo field until Hugepagesize matches ...]
00:04:20.131 07:04:53 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:20.131 07:04:53 -- setup/common.sh@33 -- # echo 2048
00:04:20.131 07:04:53 -- setup/common.sh@33 -- # return 0
00:04:20.131 07:04:53 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:20.131 07:04:53 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:20.131 07:04:53 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:20.131 07:04:53 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:20.131 07:04:53 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:20.131 07:04:53 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:20.131 07:04:53 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:20.131 07:04:53 -- setup/hugepages.sh@207 -- # get_nodes
00:04:20.131 07:04:53 -- setup/hugepages.sh@27 -- # local node
00:04:20.131 07:04:53 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:20.391 07:04:53 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:20.391 07:04:53 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:20.391 07:04:53 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:20.391 07:04:53 -- setup/hugepages.sh@208 -- # clear_hp
00:04:20.391 07:04:53 -- setup/hugepages.sh@37 -- # local node hp
00:04:20.391 07:04:53 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:20.391 07:04:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:20.391 07:04:53 -- setup/hugepages.sh@41 -- # echo 0
00:04:20.391 07:04:53 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:20.391 07:04:53 -- setup/hugepages.sh@41 -- # echo 0
00:04:20.391 07:04:53 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
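[Editor's note] The compare/continue wall above is xtrace output from get_meminfo, which reads /proc/meminfo (or a per-node meminfo when a node is given), strips any "Node N " prefix, and scans key by key until the requested field is found; here it returned Hugepagesize = 2048 kB. A sketch reconstructed from the traced commands at setup/common.sh@17-33 (not SPDK's verbatim source):

  shopt -s extglob                # needed for the +([0-9]) pattern below
  get_meminfo() {
    local get=$1 node=${2-} var val mem_f mem line
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # drop per-node prefixes if present
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] || continue    # the repeated compare/continue above
      echo "$val"                         # e.g. get_meminfo Hugepagesize -> 2048
      return 0
    done
    return 1
  }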
00:04:20.391 07:04:53 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:20.391 07:04:53 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:20.391 07:04:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:20.391 07:04:53 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:20.391 07:04:53 -- common/autotest_common.sh@10 -- # set +x
00:04:20.391 ************************************
00:04:20.391 START TEST default_setup
00:04:20.391 ************************************
00:04:20.391 07:04:53 -- common/autotest_common.sh@1102 -- # default_setup
00:04:20.391 07:04:53 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:20.391 07:04:53 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:20.391 07:04:53 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:20.391 07:04:53 -- setup/hugepages.sh@51 -- # shift
00:04:20.391 07:04:53 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:20.391 07:04:53 -- setup/hugepages.sh@52 -- # local node_ids
00:04:20.391 07:04:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:20.391 07:04:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:20.391 07:04:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:20.391 07:04:53 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:20.391 07:04:53 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:20.391 07:04:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:20.391 07:04:53 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:20.391 07:04:53 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:20.391 07:04:53 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:20.391 07:04:53 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:20.391 07:04:53 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:20.391 07:04:53 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:20.391 07:04:53 -- setup/hugepages.sh@73 -- # return 0
00:04:20.391 07:04:53 -- setup/hugepages.sh@137 -- # setup output
00:04:20.391 07:04:53 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:20.391 07:04:53 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:20.650 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev
00:04:20.650 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
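[Editor's note] The get_test_nr_hugepages trace above converts a byte-sized request into a page count: 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size reported earlier gives the 1024 pages that default_setup assigns to node 0. Spelled out:

  size_kb=2097152        # requested pool: 2097152 kB = 2 GiB
  hugepage_kb=2048       # Hugepagesize from get_meminfo above
  echo $(( size_kb / hugepage_kb ))   # 1024 -> nr_hugepages=1024, nodes_test[0]=1024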
00:04:21.263 07:04:54 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:21.263 07:04:54 -- setup/hugepages.sh@89 -- # local node
00:04:21.263 07:04:54 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:21.263 07:04:54 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:21.263 07:04:54 -- setup/hugepages.sh@92 -- # local surp
00:04:21.263 07:04:54 -- setup/hugepages.sh@93 -- # local resv
00:04:21.263 07:04:54 -- setup/hugepages.sh@94 -- # local anon
00:04:21.263 07:04:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:21.263 07:04:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:21.263 07:04:54 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:21.263 07:04:54 -- setup/common.sh@18 -- # local node=
00:04:21.263 07:04:54 -- setup/common.sh@19 -- # local var val
00:04:21.263 07:04:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.263 07:04:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.263 07:04:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.263 07:04:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.263 07:04:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.263 07:04:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.263 07:04:54 -- setup/common.sh@31 -- # IFS=': '
00:04:21.263 07:04:54 -- setup/common.sh@31 -- # read -r var val _
00:04:21.263 07:04:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5135536 kB' 'MemAvailable: 9505332 kB' 'Buffers: 38140 kB' 'Cached: 4448900 kB' 'SwapCached: 0 kB' 'Active: 1213932 kB' 'Inactive: 3408660 kB' 'Active(anon): 140048 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406868 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154164 kB' 'Mapped: 75584 kB' 'Shmem: 2628 kB' 'KReclaimable: 210960 kB' 'Slab: 310020 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99060 kB' 'KernelStack: 4640 kB' 'PageTables: 3524 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 720068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14432 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:21.263 07:04:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:21.263 07:04:54 -- setup/common.sh@32 -- # continue
00:04:21.263 07:04:54 -- setup/common.sh@31 -- # IFS=': '
00:04:21.263 07:04:54 -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 xtrace repeats the IFS=': ' / read / compare / continue cycle for each remaining /proc/meminfo field until AnonHugePages matches ...]
00:04:21.538 07:04:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:21.538 07:04:54 -- setup/common.sh@33 -- # echo 0
00:04:21.538 07:04:54 -- setup/common.sh@33 -- # return 0
00:04:21.538 07:04:54 -- setup/hugepages.sh@97 -- # anon=0
00:04:21.538 07:04:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:21.538 07:04:54 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.538 07:04:54 -- setup/common.sh@18 -- # local node=
00:04:21.538 07:04:54 -- setup/common.sh@19 -- # local var val
00:04:21.538 07:04:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.538 07:04:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.538 07:04:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.538 07:04:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.538 07:04:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.538 07:04:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.538 07:04:54 -- setup/common.sh@31 -- # IFS=': '
00:04:21.538 07:04:54 -- setup/common.sh@31 -- # read -r var val _
00:04:21.538 07:04:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5135796 kB' 'MemAvailable: 9505592 kB' 'Buffers: 38140 kB' 'Cached: 4448900 kB' 'SwapCached: 0 kB' 'Active: 1214188 kB' 'Inactive: 3408660 kB' 'Active(anon): 140304 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406868 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154420 kB' 'Mapped: 75584 kB' 'Shmem: 2628 kB' 'KReclaimable: 210960 kB' 'Slab: 310020 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99060 kB' 'KernelStack: 4640 kB' 'PageTables: 3520 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 720068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14432 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:21.538 07:04:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.538 07:04:54 -- setup/common.sh@32 -- # continue
00:04:21.538 07:04:54 -- setup/common.sh@31 -- # IFS=': '
00:04:21.538 07:04:54 -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 xtrace repeats the IFS=': ' / read / compare / continue cycle for each remaining /proc/meminfo field until HugePages_Surp matches ...]
00:04:21.539 07:04:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.539 07:04:54 -- setup/common.sh@33 -- # echo 0
00:04:21.539 07:04:54 -- setup/common.sh@33 -- # return 0
00:04:21.539 07:04:54 -- setup/hugepages.sh@99 -- # surp=0
00:04:21.539 07:04:54 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:21.539 07:04:54 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:21.539 07:04:54 -- setup/common.sh@18 -- # local node=
00:04:21.539 07:04:54 -- setup/common.sh@19 -- # local var val
00:04:21.539 07:04:54 -- setup/common.sh@20 -- # local mem_f mem
00:04:21.539 07:04:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.539 07:04:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.539 07:04:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.539 07:04:54 -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.539 07:04:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.539 07:04:54 -- setup/common.sh@31 -- # IFS=': '
00:04:21.539 07:04:54 --
setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:04:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5135788 kB' 'MemAvailable: 9505584 kB' 'Buffers: 38140 kB' 'Cached: 4448900 kB' 'SwapCached: 0 kB' 'Active: 1214348 kB' 'Inactive: 3408660 kB' 'Active(anon): 140464 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406868 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154580 kB' 'Mapped: 75584 kB' 'Shmem: 2628 kB' 'KReclaimable: 210960 kB' 'Slab: 310020 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99060 kB' 'KernelStack: 4624 kB' 'PageTables: 3496 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 720068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14448 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:21.539 07:04:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.539 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.539 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:04:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.539 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.539 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:04:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.539 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.539 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.539 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.539 07:04:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.539 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.539 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 
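The repetitive `[[ var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue` records above and below are single meminfo lookups unrolled by xtrace: get_meminfo in setup/common.sh walks every key in /proc/meminfo until it reaches the one it was asked for, and the backslash-riddled pattern is just how `set -x` escapes the literal strings HugePages_Surp and HugePages_Rsvd being matched in these two passes. A minimal sketch of that pattern, reconstructed from the trace; the names get, node, and mem_f mirror the xtrace, while the standalone wrapper and the example call are illustrative, not the SPDK source verbatim:

    #!/usr/bin/env bash
    shopt -s extglob
    # Fetch one field from /proc/meminfo, or from a node's meminfo when a
    # node number is given -- the lookup pattern the trace above executes.
    get_meminfo() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo mem line var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # one comparison per non-matching key is what fills the trace
            # above with [[ ]] / continue records
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 in the run above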
00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 
07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.540 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.540 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.541 07:04:54 -- setup/common.sh@33 -- # echo 0 00:04:21.541 07:04:54 -- setup/common.sh@33 -- # return 0 00:04:21.541 07:04:54 -- setup/hugepages.sh@100 -- # resv=0 00:04:21.541 nr_hugepages=1024 00:04:21.541 resv_hugepages=0 00:04:21.541 07:04:54 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.541 07:04:54 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:21.541 surplus_hugepages=0 00:04:21.541 anon_hugepages=0 00:04:21.541 07:04:54 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:21.541 07:04:54 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:21.541 07:04:54 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.541 07:04:54 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:21.541 07:04:54 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:21.541 07:04:54 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:21.541 07:04:54 -- setup/common.sh@18 -- # local node= 00:04:21.541 07:04:54 -- setup/common.sh@19 -- # local var val 00:04:21.541 07:04:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.541 07:04:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.541 07:04:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.541 07:04:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.541 07:04:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.541 07:04:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5136008 kB' 'MemAvailable: 9505804 kB' 'Buffers: 38140 kB' 'Cached: 4448900 kB' 'SwapCached: 0 kB' 'Active: 1213896 kB' 'Inactive: 3408660 kB' 'Active(anon): 140012 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406868 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154344 kB' 'Mapped: 75272 kB' 'Shmem: 2628 kB' 'KReclaimable: 210960 kB' 'Slab: 310032 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99072 kB' 'KernelStack: 4676 kB' 'PageTables: 3476 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 730504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14448 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 
07:04:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 
07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.541 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.541 07:04:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 
-- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:21.542 07:04:54 -- setup/common.sh@33 -- # echo 1024 00:04:21.542 07:04:54 -- setup/common.sh@33 -- # return 0 00:04:21.542 07:04:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:21.542 07:04:54 -- setup/hugepages.sh@112 -- # get_nodes 00:04:21.542 07:04:54 -- setup/hugepages.sh@27 -- # local node 00:04:21.542 07:04:54 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.542 07:04:54 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:21.542 07:04:54 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:21.542 07:04:54 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.542 07:04:54 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:21.542 07:04:54 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:21.542 07:04:54 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:21.542 07:04:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.542 07:04:54 -- setup/common.sh@18 -- # local node=0 00:04:21.542 07:04:54 -- setup/common.sh@19 -- # local var val 00:04:21.542 07:04:54 -- setup/common.sh@20 -- # local mem_f mem 00:04:21.542 07:04:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.542 07:04:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:21.542 07:04:54 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:21.542 07:04:54 -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.542 07:04:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5135984 kB' 'MemUsed: 7127272 kB' 'Active: 1213992 kB' 'Inactive: 3408660 kB' 'Active(anon): 140108 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406868 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'FilePages: 4487040 kB' 'Mapped: 75272 kB' 'AnonPages: 154400 kB' 'Shmem: 2628 kB' 'KernelStack: 4728 kB' 'PageTables: 3844 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 210960 kB' 'Slab: 310032 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.542 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.542 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 
-- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # continue 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # IFS=': ' 00:04:21.543 07:04:54 -- setup/common.sh@31 -- # read -r var val _ 00:04:21.543 07:04:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.543 07:04:54 -- setup/common.sh@33 -- # echo 0 00:04:21.543 07:04:54 -- setup/common.sh@33 -- # return 0 00:04:21.543 07:04:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:21.543 07:04:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:21.543 07:04:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:21.543 07:04:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:21.543 07:04:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:21.543 node0=1024 expecting 1024 00:04:21.543 07:04:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:21.543 00:04:21.543 real 0m1.148s 00:04:21.543 user 0m0.322s 00:04:21.543 sys 0m0.746s 00:04:21.543 07:04:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:21.543 07:04:54 -- common/autotest_common.sh@10 -- # set +x 00:04:21.543 ************************************ 00:04:21.543 END TEST 
default_setup 00:04:21.543 ************************************ 00:04:21.543 07:04:55 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:21.543 07:04:55 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:21.543 07:04:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:21.543 07:04:55 -- common/autotest_common.sh@10 -- # set +x 00:04:21.543 ************************************ 00:04:21.543 START TEST per_node_1G_alloc 00:04:21.543 ************************************ 00:04:21.543 07:04:55 -- common/autotest_common.sh@1102 -- # per_node_1G_alloc 00:04:21.543 07:04:55 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:21.543 07:04:55 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:04:21.543 07:04:55 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:21.543 07:04:55 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.543 07:04:55 -- setup/hugepages.sh@51 -- # shift 00:04:21.543 07:04:55 -- setup/hugepages.sh@52 -- # node_ids=("$@") 00:04:21.543 07:04:55 -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.543 07:04:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.543 07:04:55 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:21.543 07:04:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.543 07:04:55 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:21.543 07:04:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.543 07:04:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:21.543 07:04:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:21.543 07:04:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.543 07:04:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.543 07:04:55 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.543 07:04:55 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.543 07:04:55 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:21.543 07:04:55 -- setup/hugepages.sh@73 -- # return 0 00:04:21.543 07:04:55 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:21.543 07:04:55 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:04:21.543 07:04:55 -- setup/hugepages.sh@146 -- # setup output 00:04:21.543 07:04:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.543 07:04:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:21.802 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:04:21.802 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:22.064 07:04:55 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:04:22.064 07:04:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:22.064 07:04:55 -- setup/hugepages.sh@89 -- # local node 00:04:22.064 07:04:55 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:22.064 07:04:55 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:22.064 07:04:55 -- setup/hugepages.sh@92 -- # local surp 00:04:22.064 07:04:55 -- setup/hugepages.sh@93 -- # local resv 00:04:22.064 07:04:55 -- setup/hugepages.sh@94 -- # local anon 00:04:22.064 07:04:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:22.064 07:04:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:22.064 07:04:55 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:22.064 07:04:55 -- setup/common.sh@18 -- # local node= 00:04:22.064 07:04:55 -- setup/common.sh@19 -- # local var val 00:04:22.064 07:04:55 -- setup/common.sh@20 -- # local mem_f mem 
00:04:22.064 07:04:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:22.064 07:04:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:22.064 07:04:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:22.064 07:04:55 -- setup/common.sh@28 -- # mapfile -t mem 00:04:22.064 07:04:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.064 07:04:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6184244 kB' 'MemAvailable: 10554044 kB' 'Buffers: 38140 kB' 'Cached: 4448904 kB' 'SwapCached: 0 kB' 'Active: 1214072 kB' 'Inactive: 3408664 kB' 'Active(anon): 140188 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154288 kB' 'Mapped: 75272 kB' 'Shmem: 2628 kB' 'KReclaimable: 210960 kB' 'Slab: 309972 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99012 kB' 'KernelStack: 4612 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5607340 kB' 'Committed_AS: 728232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14464 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # continue 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # continue 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # continue 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # continue 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # continue 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # continue 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # IFS=': ' 00:04:22.064 07:04:55 -- setup/common.sh@31 -- # read -r var val _ 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:22.064 07:04:55 -- setup/common.sh@32 -- # continue 
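The meminfo dump in the records above confirms the reallocation took effect before verify_nr_hugepages starts counting: HugePages_Total and HugePages_Free both read 512 and Hugetlb reads 1048576 kB, matching the 1 GiB request. The earlier `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` record is the guard that only samples AnonHugePages when transparent hugepages are not hard-disabled. A sketch of that guard; the sysfs path is the usual source of the "always [madvise] never" string and is an assumption here, since the trace shows only the already-expanded value:

    # Skip anon-hugepage accounting when THP is disabled ("[never]" selected).
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # assumed source
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
    else
        anon=0
    fi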
00:04:22.064 07:04:55 -- setup/common.sh@31 -- # IFS=': '
00:04:22.064 07:04:55 -- setup/common.sh@31 -- # read -r var val _
00:04:22.064 07:04:55 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.064 07:04:55 -- setup/common.sh@32 -- # continue (this compare/continue pair repeats for every non-matching meminfo key, Inactive through HardwareCorrupted)
00:04:22.065 07:04:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:22.065 07:04:55 -- setup/common.sh@33 -- # echo 0
00:04:22.065 07:04:55 -- setup/common.sh@33 -- # return 0
00:04:22.065 07:04:55 -- setup/hugepages.sh@97 -- # anon=0
00:04:22.065 07:04:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:22.065 07:04:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.065 07:04:55 -- setup/common.sh@18 -- # local node=
00:04:22.065 07:04:55 -- setup/common.sh@19 -- # local var val
00:04:22.065 07:04:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.065 07:04:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.065 07:04:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.065 07:04:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.065 07:04:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.065 07:04:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.065 07:04:55 -- setup/common.sh@31 -- # IFS=': '
00:04:22.065 07:04:55 -- setup/common.sh@31 -- # read -r var val _
00:04:22.065 07:04:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6184504 kB' 'MemAvailable: 10554304 kB' 'Buffers: 38140 kB' 'Cached: 4448904 kB' 'SwapCached: 0 kB' 'Active: 1214332 kB' 'Inactive: 3408664 kB' 'Active(anon): 140448 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154548 kB' 'Mapped: 75272 kB' 'Shmem: 2628 kB' 'KReclaimable: 210960 kB' 'Slab: 309972 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99012 kB' 'KernelStack: 4612 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5607340 kB' 'Committed_AS: 728232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14464 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:22.065 07:04:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.065 07:04:55 -- setup/common.sh@32 -- # continue (repeats for every non-matching key, MemTotal through HugePages_Rsvd)
00:04:22.065 07:04:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.065 07:04:55 -- setup/common.sh@33 -- # echo 0
00:04:22.065 07:04:55 -- setup/common.sh@33 -- # return 0
00:04:22.065 07:04:55 -- setup/hugepages.sh@99 -- # surp=0
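Each compare/continue run above is one pass of the same lookup: setup/common.sh snapshots a meminfo file into an array and scans it key by key until the requested field matches, then echoes the value. A minimal sketch of that lookup pattern, for readability; this is a restatement, not the verbatim helper (which uses mapfile plus an extglob strip of the per-node "Node <N>" prefix, as the trace shows):

    get_meminfo() {
        # Print the value of one meminfo key, system-wide by default or for a
        # single NUMA node when a node number is passed as the second argument.
        local get=$1 node=$2 mem_f=/proc/meminfo
        local -a f
        [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r -a f; do
            # Per-node meminfo lines carry a "Node <N>" prefix; drop it so the
            # key sits in f[0] for both file layouts.
            [[ ${f[0]} == Node ]] && f=("${f[@]:2}")
            if [[ ${f[0]} == "$get:" ]]; then
                echo "${f[1]}"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

Against the snapshot above, get_meminfo HugePages_Surp prints 0 and get_meminfo HugePages_Total prints 512, the same values the traced scan produces.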
00:04:22.065 07:04:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:22.065 07:04:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:22.065 07:04:55 -- setup/common.sh@18 -- # local node=
00:04:22.065 07:04:55 -- setup/common.sh@19 -- # local var val
00:04:22.065 07:04:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.065 07:04:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.065 07:04:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.065 07:04:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.065 07:04:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.065 07:04:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.065 07:04:55 -- setup/common.sh@31 -- # IFS=': '
00:04:22.065 07:04:55 -- setup/common.sh@31 -- # read -r var val _
00:04:22.065 07:04:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6184504 kB' 'MemAvailable: 10554304 kB' 'Buffers: 38140 kB' 'Cached: 4448904 kB' 'SwapCached: 0 kB' 'Active: 1214332 kB' 'Inactive: 3408664 kB' 'Active(anon): 140448 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154032 kB' 'Mapped: 75272 kB' 'Shmem: 2628 kB' 'KReclaimable: 210960 kB' 'Slab: 309972 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99012 kB' 'KernelStack: 4612 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5607340 kB' 'Committed_AS: 728232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14464 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:22.065 07:04:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.065 07:04:55 -- setup/common.sh@32 -- # continue (repeats for every non-matching key, MemTotal through HugePages_Free)
00:04:22.066 07:04:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:22.066 07:04:55 -- setup/common.sh@33 -- # echo 0
00:04:22.066 07:04:55 -- setup/common.sh@33 -- # return 0
00:04:22.066 07:04:55 -- setup/hugepages.sh@100 -- # resv=0
00:04:22.066 07:04:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:22.066 nr_hugepages=512
00:04:22.066 07:04:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:22.066 resv_hugepages=0
00:04:22.066 07:04:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:22.066 surplus_hugepages=0
00:04:22.066 07:04:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:22.066 anon_hugepages=0
00:04:22.066 07:04:55 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:22.066 07:04:55 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
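The hugepages.sh@107 check asserts that the pool seen in the snapshot is exactly the pool the test configured: HugePages_Total must equal the requested page count plus surplus and reserved pages, all of which are 0 in this run. The snapshot is also internally consistent: 512 pages x 2048 kB Hugepagesize is the 1048576 kB reported as Hugetlb. A standalone restatement of the verification, reusing the get_meminfo sketch above (expected=512 mirrors this run; the variable names are illustrative):

    expected=512                           # pages requested via nr_hugepages
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    if (( total == expected + surp + resv )); then
        echo "hugepage pool verified: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage accounting mismatch: total=$total expected=$expected" >&2
    fi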
00:04:22.066 07:04:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:22.066 07:04:55 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:22.066 07:04:55 -- setup/common.sh@18 -- # local node=
00:04:22.066 07:04:55 -- setup/common.sh@19 -- # local var val
00:04:22.066 07:04:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.066 07:04:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.066 07:04:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.066 07:04:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.066 07:04:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.066 07:04:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.066 07:04:55 -- setup/common.sh@31 -- # IFS=': '
00:04:22.066 07:04:55 -- setup/common.sh@31 -- # read -r var val _
00:04:22.066 07:04:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6184764 kB' 'MemAvailable: 10554564 kB' 'Buffers: 38140 kB' 'Cached: 4448904 kB' 'SwapCached: 0 kB' 'Active: 1214332 kB' 'Inactive: 3408664 kB' 'Active(anon): 140448 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154164 kB' 'Mapped: 75272 kB' 'Shmem: 2628 kB' 'KReclaimable: 210960 kB' 'Slab: 309972 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99012 kB' 'KernelStack: 4680 kB' 'PageTables: 3420 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5607340 kB' 'Committed_AS: 732944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14480 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:22.066 07:04:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.066 07:04:55 -- setup/common.sh@32 -- # continue (repeats for every non-matching key, MemTotal through CmaFree)
00:04:22.067 07:04:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:22.067 07:04:55 -- setup/common.sh@33 -- # echo 512
00:04:22.067 07:04:55 -- setup/common.sh@33 -- # return 0
00:04:22.067 07:04:55 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:22.067 07:04:55 -- setup/hugepages.sh@112 -- # get_nodes
00:04:22.067 07:04:55 -- setup/hugepages.sh@27 -- # local node
00:04:22.067 07:04:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:22.067 07:04:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:22.067 07:04:55 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:22.067 07:04:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:22.067 07:04:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:22.067 07:04:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:22.067 07:04:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:22.067 07:04:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:22.067 07:04:55 -- setup/common.sh@18 -- # local node=0
00:04:22.067 07:04:55 -- setup/common.sh@19 -- # local var val
00:04:22.067 07:04:55 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.067 07:04:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.067 07:04:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:22.067 07:04:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:22.067 07:04:55 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.067 07:04:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:22.067 07:04:55 -- setup/common.sh@31 -- # IFS=': '
00:04:22.067 07:04:55 -- setup/common.sh@31 -- # read -r var val _
00:04:22.067 07:04:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6184520 kB' 'MemUsed: 6078736 kB' 'Active: 1214332 kB' 'Inactive: 3408664 kB' 'Active(anon): 140448 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073884 kB' 'Inactive(file): 3406872 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'FilePages: 4487044 kB' 'Mapped: 75272 kB' 'AnonPages: 154164 kB' 'Shmem: 2628 kB' 'KernelStack: 4748 kB' 'PageTables: 3808 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 210960 kB' 'Slab: 309972 kB' 'SReclaimable: 210960 kB' 'SUnreclaim: 99012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:22.067 07:04:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.067 07:04:55 -- setup/common.sh@32 -- # continue (repeats for every non-matching node0 key, MemTotal through HugePages_Free)
00:04:22.067 07:04:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:22.067 07:04:55 -- setup/common.sh@33 -- # echo 0
00:04:22.067 07:04:55 -- setup/common.sh@33 -- # return 0
00:04:22.067 07:04:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:22.067 07:04:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:22.067 07:04:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:22.067 07:04:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:22.067 07:04:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:22.067 node0=512 expecting 512
00:04:22.067 07:04:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:22.067
00:04:22.067 real 0m0.655s
00:04:22.067 user 0m0.266s
00:04:22.067 sys 0m0.423s
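get_nodes and the per-node get_meminfo call above walk /sys/devices/system/node/ to compare what each NUMA node actually holds against the expected split. The same verdict can be read from the kernel's per-node hugepage counters directly; this sketch assumes the 2048 kB page size shown in the snapshots (the hugepages-2048kB subdirectory is the standard kernel sysfs layout, not something defined by this test):

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # Per-node count of 2 MiB hugepages currently in the pool.
        nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
        echo "node$node=$nr"
    done

On this single-node VM it would print node0=512, the same result as the 'node0=512 expecting 512' line above.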
00:04:22.067 07:04:55 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:22.067 07:04:55 -- common/autotest_common.sh@10 -- # set +x
00:04:22.068 ************************************
00:04:22.068 END TEST per_node_1G_alloc
00:04:22.068 ************************************
00:04:22.068 07:04:55 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:22.068 07:04:55 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:22.068 07:04:55 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:22.068 07:04:55 -- common/autotest_common.sh@10 -- # set +x
00:04:22.068 ************************************
00:04:22.068 START TEST even_2G_alloc
00:04:22.068 ************************************
00:04:22.068 07:04:55 -- common/autotest_common.sh@1102 -- # even_2G_alloc
00:04:22.068 07:04:55 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:22.068 07:04:55 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:22.068 07:04:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:22.068 07:04:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:22.068 07:04:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:22.068 07:04:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:22.068 07:04:55 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:22.068 07:04:55 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:22.068 07:04:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:22.068 07:04:55 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:22.068 07:04:55 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:22.068 07:04:55 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:22.068 07:04:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:22.068 07:04:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:22.068 07:04:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.068 07:04:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:22.068 07:04:55 -- setup/hugepages.sh@83 -- # : 0
00:04:22.068 07:04:55 -- setup/hugepages.sh@84 -- # : 0
00:04:22.068 07:04:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:22.068 07:04:55 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:22.068 07:04:55 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:22.068 07:04:55 -- setup/hugepages.sh@153 -- # setup output
00:04:22.068 07:04:55 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:22.068 07:04:55 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:22.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev
00:04:22.634 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:22.893 07:04:56 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:22.893 07:04:56 -- setup/hugepages.sh@89 -- # local node
00:04:22.893 07:04:56 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:22.893 07:04:56 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:22.893 07:04:56 -- setup/hugepages.sh@92 -- # local surp
00:04:22.893 07:04:56 -- setup/hugepages.sh@93 -- # local resv
00:04:22.893 07:04:56 -- setup/hugepages.sh@94 -- # local anon
00:04:22.893 07:04:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
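[editor's annotation] Before verify_nr_hugepages starts its meminfo scans below, note how get_test_nr_hugepages sized the pool above: a 2097152 kB (2 GiB) request divided by the 2048 kB Hugepagesize reported in the snapshots gives nr_hugepages=1024. A minimal sketch of that arithmetic, assuming a 2048 kB default page size as on this VM (variable names are illustrative):

    #!/usr/bin/env bash
    # Sketch: derive the hugepage count the way the get_test_nr_hugepages
    # trace implies: requested kB / per-page kB.
    size_kb=2097152                                                      # 2 GiB in kB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ { print $2 }' /proc/meminfo) # 2048 here
    (( size_kb >= hugepagesize_kb )) || { echo "request below one page" >&2; exit 1; }
    nr_hugepages=$(( size_kb / hugepagesize_kb ))                        # 2097152 / 2048 = 1024
    echo "nr_hugepages=${nr_hugepages}"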
00:04:22.893 07:04:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:22.893 07:04:56 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:22.893 07:04:56 -- setup/common.sh@18 -- # local node=
00:04:22.893 07:04:56 -- setup/common.sh@19 -- # local var val
00:04:22.893 07:04:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:22.893 07:04:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:22.893 07:04:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:22.893 07:04:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:22.893 07:04:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:22.893 07:04:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.155 07:04:56 -- setup/common.sh@31 -- # IFS=': '
00:04:23.155 07:04:56 -- setup/common.sh@31 -- # read -r var val _
00:04:23.155 07:04:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5135052 kB' 'MemAvailable: 9504864 kB' 'Buffers: 38140 kB' 'Cached: 4448900 kB' 'SwapCached: 0 kB' 'Active: 1214028 kB' 'Inactive: 3408652 kB' 'Active(anon): 140136 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073892 kB' 'Inactive(file): 3406860 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154176 kB' 'Mapped: 75228 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310268 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99292 kB' 'KernelStack: 4648 kB' 'PageTables: 3356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 731800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14480 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: the @31/@32 read, match, continue cycle walks the keys MemTotal through HardwareCorrupted; each non-matching key hits 'continue' until AnonHugePages matches]
00:04:23.156 07:04:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:23.156 07:04:56 -- setup/common.sh@33 -- # echo 0
00:04:23.156 07:04:56 -- setup/common.sh@33 -- # return 0
00:04:23.156 07:04:56 -- setup/hugepages.sh@97 -- # anon=0
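[editor's annotation] Every get_meminfo call in this log follows the exact pattern just traced: pick /proc/meminfo or a node's own meminfo file, strip the "Node <n> " prefixes, then read each line with IFS=': ' until the requested key matches and echo its value. A condensed reconstruction of that flow follows; it is a sketch distilled from the xtrace, not the literal setup/common.sh source, so line-level details may differ:

    #!/usr/bin/env bash
    # extglob is needed for the +([0-9]) pattern that strips "Node <n> "
    # prefixes, as seen at setup/common.sh@29 in the trace.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Node-scoped queries switch to the node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")      # drop "Node 0 " prefixes, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then     # first matching key wins
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo AnonHugePages      # system-wide: prints 0 on this box
    get_meminfo HugePages_Surp 0   # node-scoped: reads node0/meminfo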
00:04:23.156 07:04:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:23.156 07:04:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.156 07:04:56 -- setup/common.sh@18 -- # local node=
00:04:23.156 07:04:56 -- setup/common.sh@19 -- # local var val
00:04:23.156 07:04:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.156 07:04:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.156 07:04:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.156 07:04:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.156 07:04:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.156 07:04:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.156 07:04:56 -- setup/common.sh@31 -- # IFS=': '
00:04:23.156 07:04:56 -- setup/common.sh@31 -- # read -r var val _
00:04:23.157 07:04:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5135052 kB' 'MemAvailable: 9504864 kB' 'Buffers: 38140 kB' 'Cached: 4448900 kB' 'SwapCached: 0 kB' 'Active: 1214288 kB' 'Inactive: 3408652 kB' 'Active(anon): 140396 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073892 kB' 'Inactive(file): 3406860 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154436 kB' 'Mapped: 75228 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310268 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99292 kB' 'KernelStack: 4648 kB' 'PageTables: 3356 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 731800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14480 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: per-key scan of the snapshot above, MemTotal through HugePages_Free, each taking the @32 'continue' branch]
00:04:23.157 07:04:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:23.158 07:04:56 -- setup/common.sh@33 -- # echo 0
00:04:23.158 07:04:56 -- setup/common.sh@33 -- # return 0
00:04:23.158 07:04:56 -- setup/hugepages.sh@99 -- # surp=0
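[editor's annotation] The harness collects its three hugepage counters one full scan at a time: surp above, resv and total below. Purely as an illustration, the same three values can be pulled in a single awk pass; this one-liner is an alternative sketch, not what setup/common.sh does:

    #!/usr/bin/env bash
    # Sketch: HugePages_Surp, HugePages_Rsvd and HugePages_Total in one pass.
    read -r surp resv total < <(awk '
        /^HugePages_Surp:/  { s = $2 }
        /^HugePages_Rsvd:/  { r = $2 }
        /^HugePages_Total:/ { t = $2 }
        END { print s, r, t }' /proc/meminfo)
    echo "surp=${surp} resv=${resv} total=${total}"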
00:04:23.158 07:04:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:23.158 07:04:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:23.158 07:04:56 -- setup/common.sh@18 -- # local node=
00:04:23.158 07:04:56 -- setup/common.sh@19 -- # local var val
00:04:23.158 07:04:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.158 07:04:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.158 07:04:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.158 07:04:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.158 07:04:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.158 07:04:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.158 07:04:56 -- setup/common.sh@31 -- # IFS=': '
00:04:23.158 07:04:56 -- setup/common.sh@31 -- # read -r var val _
00:04:23.158 07:04:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5135464 kB' 'MemAvailable: 9505276 kB' 'Buffers: 38140 kB' 'Cached: 4448900 kB' 'SwapCached: 0 kB' 'Active: 1214488 kB' 'Inactive: 3408652 kB' 'Active(anon): 140596 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073892 kB' 'Inactive(file): 3406860 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 154248 kB' 'Mapped: 75228 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310140 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99164 kB' 'KernelStack: 4616 kB' 'PageTables: 3300 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 730892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14480 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: per-key scan of the snapshot above, MemTotal through HugePages_Free, each taking the @32 'continue' branch]
00:04:23.159 07:04:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:23.159 07:04:56 -- setup/common.sh@33 -- # echo 0
00:04:23.159 07:04:56 -- setup/common.sh@33 -- # return 0
00:04:23.159 07:04:56 -- setup/hugepages.sh@100 -- # resv=0
00:04:23.159 07:04:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:23.159 07:04:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:23.159 07:04:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:23.159 07:04:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:23.159 07:04:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:23.159 07:04:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
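[editor's annotation] hugepages.sh@107 and @110 check the books: the pool the kernel reports must equal the pages the test asked for plus any surplus and reserved pages, which with this run's counters is 1024 == 1024 + 0 + 0. A sketch of the same invariant, hard-coding the values read above purely for illustration:

    #!/usr/bin/env bash
    # Sketch: the accounting invariant behind hugepages.sh@107/@110.
    nr_hugepages=1024   # requested by the test
    surp=0              # HugePages_Surp from /proc/meminfo
    resv=0              # HugePages_Rsvd from /proc/meminfo
    total=1024          # HugePages_Total from /proc/meminfo
    if (( total != nr_hugepages + surp + resv )); then
        echo "hugepage accounting mismatch: ${total} != ${nr_hugepages}+${surp}+${resv}" >&2
        exit 1
    fi
    echo "accounting OK: ${total} == ${nr_hugepages} + ${surp} + ${resv}"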
00:04:23.159 07:04:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:23.159 07:04:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:23.159 07:04:56 -- setup/common.sh@18 -- # local node=
00:04:23.159 07:04:56 -- setup/common.sh@19 -- # local var val
00:04:23.159 07:04:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.159 07:04:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.159 07:04:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:23.159 07:04:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:23.159 07:04:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.159 07:04:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.159 07:04:56 -- setup/common.sh@31 -- # IFS=': '
00:04:23.159 07:04:56 -- setup/common.sh@31 -- # read -r var val _
00:04:23.159 07:04:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5135164 kB' 'MemAvailable: 9504976 kB' 'Buffers: 38140 kB' 'Cached: 4448900 kB' 'SwapCached: 0 kB' 'Active: 1214228 kB' 'Inactive: 3408652 kB' 'Active(anon): 140336 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073892 kB' 'Inactive(file): 3406860 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 153860 kB' 'Mapped: 75228 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310140 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99164 kB' 'KernelStack: 4684 kB' 'PageTables: 3300 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 741004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14480 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[xtrace elided: per-key scan of the snapshot above, MemTotal through CmaFree, each taking the @32 'continue' branch]
00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:23.161 07:04:56 -- setup/common.sh@33 -- # echo 1024
00:04:23.161 07:04:56 -- setup/common.sh@33 -- # return 0
00:04:23.161 07:04:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:23.161 07:04:56 -- setup/hugepages.sh@112 -- # get_nodes
00:04:23.161 07:04:56 -- setup/hugepages.sh@27 -- # local node
00:04:23.161 07:04:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:23.161 07:04:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:23.161 07:04:56 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:23.161 07:04:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:23.161 07:04:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:23.161 07:04:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:23.161 07:04:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:23.161 07:04:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:23.161 07:04:56 -- setup/common.sh@18 -- # local node=0
00:04:23.161 07:04:56 -- setup/common.sh@19 -- # local var val
00:04:23.161 07:04:56 -- setup/common.sh@20 -- # local mem_f mem
00:04:23.161 07:04:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:23.161 07:04:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:23.161 07:04:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:23.161 07:04:56 -- setup/common.sh@28 -- # mapfile -t mem
00:04:23.161 07:04:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:23.161 07:04:56 -- setup/common.sh@31 -- # IFS=': '
00:04:23.161 07:04:56 -- setup/common.sh@31 -- # read -r var val _
00:04:23.161 07:04:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5135424 kB' 'MemUsed: 7127832 kB' 'Active: 1214488 kB' 'Inactive: 3408652 kB' 'Active(anon): 140596 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073892 kB' 'Inactive(file): 3406860 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'FilePages: 4487040 kB' 'Mapped: 75228 kB' 'AnonPages: 154380 kB' 'Shmem: 2628 kB' 'KernelStack: 4752 kB' 'PageTables: 3688 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 210976 kB' 'Slab: 310140 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: per-key scan of the node0 snapshot begins, MemTotal through Unevictable each taking the @32 'continue' branch]
00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ Mlocked ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # continue [xtrace elided: identical "[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" iterations for Dirty through KReclaimable] 00:04:23.161 07:04:56 -- setup/common.sh@31 -- #
read -r var val _ 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.161 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.161 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.162 07:04:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.162 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.162 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.162 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.162 07:04:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.162 07:04:56 -- setup/common.sh@32 -- # continue 00:04:23.162 07:04:56 -- setup/common.sh@31 -- # IFS=': ' 00:04:23.162 07:04:56 -- setup/common.sh@31 -- # read -r var val _ 00:04:23.162 07:04:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.162 07:04:56 -- setup/common.sh@33 -- # echo 0 00:04:23.162 07:04:56 -- setup/common.sh@33 -- # return 0 00:04:23.162 07:04:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.162 07:04:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.162 07:04:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.162 07:04:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.162 node0=1024 expecting 1024 00:04:23.162 07:04:56 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.162 07:04:56 -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.162 00:04:23.162 real 0m0.899s 00:04:23.162 user 0m0.211s 00:04:23.162 sys 0m0.725s 00:04:23.162 07:04:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:23.162 07:04:56 -- common/autotest_common.sh@10 -- # set +x 00:04:23.162 ************************************ 00:04:23.162 END TEST even_2G_alloc 00:04:23.162 ************************************ 00:04:23.162 07:04:56 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:23.162 07:04:56 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:23.162 07:04:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:23.162 07:04:56 -- common/autotest_common.sh@10 -- # set +x 00:04:23.162 ************************************ 00:04:23.162 START TEST odd_alloc 00:04:23.162 ************************************ 00:04:23.162 07:04:56 -- common/autotest_common.sh@1102 -- # odd_alloc 00:04:23.162 07:04:56 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:23.162 07:04:56 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:23.162 07:04:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:23.162 07:04:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.162 07:04:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:23.162 07:04:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:23.162 07:04:56 -- setup/hugepages.sh@62 -- # user_nodes=("$@") 00:04:23.162 07:04:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.162 07:04:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:23.162 07:04:56 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:23.162 07:04:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.162 07:04:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.162 07:04:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:23.162 07:04:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:23.162 07:04:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.162 07:04:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:23.162 07:04:56 -- setup/hugepages.sh@83 -- # : 0 00:04:23.162 07:04:56 -- setup/hugepages.sh@84 -- # : 0 00:04:23.162 07:04:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:23.162 07:04:56 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:23.162 07:04:56 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:23.162 07:04:56 -- setup/hugepages.sh@160 -- # setup output 00:04:23.162 07:04:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.162 07:04:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:23.421 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:04:23.421 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:24.360 07:04:57 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:24.360 07:04:57 -- setup/hugepages.sh@89 -- # local node 00:04:24.360 07:04:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.360 07:04:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.360 07:04:57 -- setup/hugepages.sh@92 -- # local surp 00:04:24.360 07:04:57 -- setup/hugepages.sh@93 -- # local resv 00:04:24.360 07:04:57 -- setup/hugepages.sh@94 -- # local anon 00:04:24.360 07:04:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.360 07:04:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.360 07:04:57 -- setup/common.sh@17 -- # local get=AnonHugePages 
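Every scan in this section is the same helper at work: get_meminfo in the traced setup/common.sh reads /proc/meminfo, or /sys/devices/system/node/nodeN/meminfo when given a node argument (as in get_meminfo HugePages_Surp 0 above), strips the "Node N " prefix from per-node files, and walks the "key: value" pairs with IFS=': ' until the requested key matches, echoing its value. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK repo, so treat names and details as an approximation:

    shopt -s extglob

    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        local -a mem

        # With a node argument, prefer the per-NUMA-node statistics file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip it (extglob pattern).
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Key: value [kB]" pairs; emit the value for the requested key.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

The odd_alloc sizing above also checks out: HUGEMEM=2049 means size = 2049 * 1024 = 2098176 kB, which at the 2048 kB default hugepage size rounds up to the deliberately odd count of 1025 pages (1025 * 2048 kB = 2099200 kB, matching the "Hugetlb: 2099200 kB" fields in the meminfo dumps that follow).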
00:04:24.360 07:04:57 -- setup/common.sh@18 -- # local node= 00:04:24.360 07:04:57 -- setup/common.sh@19 -- # local var val 00:04:24.360 07:04:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.360 07:04:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.360 07:04:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.360 07:04:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.360 07:04:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.360 07:04:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.360 07:04:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5143012 kB' 'MemAvailable: 9512828 kB' 'Buffers: 38140 kB' 'Cached: 4448904 kB' 'SwapCached: 0 kB' 'Active: 1204836 kB' 'Inactive: 3408652 kB' 'Active(anon): 130940 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073896 kB' 'Inactive(file): 3406860 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 144700 kB' 'Mapped: 74752 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310092 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99116 kB' 'KernelStack: 4576 kB' 'PageTables: 3412 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5082028 kB' 'Committed_AS: 698036 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14288 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.360 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.360 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.360 07:04:57 -- 
setup/common.sh@31 -- # read -r var val _ [xtrace elided: identical "[[ key == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue" iterations for Active through CommitLimit] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.361 07:04:57 -- setup/common.sh@33 -- # echo 0 00:04:24.361 07:04:57 -- setup/common.sh@33 -- # return 0 00:04:24.361 07:04:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:24.361 07:04:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.361 07:04:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.361 07:04:57 -- setup/common.sh@18 -- # local node= 00:04:24.361 07:04:57 -- setup/common.sh@19 -- # local var val 00:04:24.361 07:04:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.361 07:04:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.361 07:04:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.361 07:04:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.361 07:04:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.361 07:04:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5143012 kB' 'MemAvailable: 9512828 kB' 'Buffers: 38140 kB' 'Cached: 4448904 kB' 'SwapCached: 0 kB' 'Active: 1204836 kB' 'Inactive: 3408652 kB' 'Active(anon): 130940 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073896 kB' 'Inactive(file): 3406860 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 144184 kB' 'Mapped: 74752 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310092 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99116 kB' 'KernelStack: 4576 kB' 'PageTables: 3412 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5082028 kB' 'Committed_AS: 702856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.361 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.361 07:04:57 -- setup/common.sh@32 -- # continue [xtrace elided: identical "[[ key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" iterations for Unevictable through HardwareCorrupted]
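Each of these full scans fills in one variable of verify_nr_hugepages: hugepages.sh@97 filled anon from AnonHugePages (read only because transparent hugepages are not set to "[never]"), @99 takes surp from HugePages_Surp (the scan completing just below), and the two scans that follow supply resv (HugePages_Rsvd) and the final HugePages_Total, which @107-@109 then require to equal nr_hugepages + surp + resv. A condensed sketch of that accounting with this run's values as comments; the shape is inferred from the traced line numbers, not the verbatim hugepages.sh:

    # nr_hugepages is set earlier by get_test_nr_hugepages (1025 in this run).
    verify_nr_hugepages() {
        local anon=0 surp resv total

        # AnonHugePages only counts while THP is not disabled outright.
        if [[ $(< /sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
            anon=$(get_meminfo AnonHugePages)   # 0 kB in this run
        fi
        surp=$(get_meminfo HugePages_Surp)      # 0: no surplus pages in use
        resv=$(get_meminfo HugePages_Rsvd)      # 0: nothing reserved yet
        total=$(get_meminfo HugePages_Total)    # 1025

        echo "nr_hugepages=$total"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=$anon"

        # The pool must account exactly for what the test requested.
        (( total == nr_hugepages + surp + resv )) || return 1
        (( total == nr_hugepages )) || return 1
    }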
00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.362 07:04:57 -- setup/common.sh@33 -- # echo 0 00:04:24.362 07:04:57 -- setup/common.sh@33 -- # return 0 00:04:24.362 07:04:57 -- setup/hugepages.sh@99 -- # surp=0 00:04:24.362 07:04:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:24.362 07:04:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.362 07:04:57 -- setup/common.sh@18 -- # local node= 00:04:24.362 07:04:57 -- setup/common.sh@19 -- # local var val 00:04:24.362 07:04:57 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:24.362 07:04:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.362 07:04:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.362 07:04:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.362 07:04:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.362 07:04:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5142988 kB' 'MemAvailable: 9512804 kB' 'Buffers: 38140 kB' 'Cached: 4448904 kB' 'SwapCached: 0 kB' 'Active: 1205024 kB' 'Inactive: 3408652 kB' 'Active(anon): 131128 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073896 kB' 'Inactive(file): 3406860 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 144380 kB' 'Mapped: 74752 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310092 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99116 kB' 'KernelStack: 4560 kB' 'PageTables: 3392 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5082028 kB' 'Committed_AS: 702856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.362 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.362 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.363 07:04:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.363 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.363 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.363 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.363 07:04:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.363 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.363 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.363 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.363 07:04:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.363 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.363 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.363 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.363 07:04:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.363 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.363 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.363 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.363 07:04:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:24.363 07:04:57 -- setup/common.sh@32 -- # continue [xtrace elided: identical "[[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" iterations for Inactive through CmaFree] 00:04:24.364 07:04:57 -- setup/common.sh@32 -- # [[
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.364 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.364 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.364 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.364 07:04:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.364 07:04:57 -- setup/common.sh@32 -- # continue 00:04:24.364 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.364 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.364 07:04:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.364 07:04:57 -- setup/common.sh@33 -- # echo 0 00:04:24.364 07:04:57 -- setup/common.sh@33 -- # return 0 00:04:24.364 07:04:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:24.364 07:04:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:24.364 nr_hugepages=1025 00:04:24.364 07:04:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.364 resv_hugepages=0 00:04:24.364 surplus_hugepages=0 00:04:24.364 07:04:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.364 anon_hugepages=0 00:04:24.364 07:04:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.364 07:04:57 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:24.364 07:04:57 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:24.364 07:04:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:24.364 07:04:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.364 07:04:57 -- setup/common.sh@18 -- # local node= 00:04:24.364 07:04:57 -- setup/common.sh@19 -- # local var val 00:04:24.364 07:04:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:24.364 07:04:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.364 07:04:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.364 07:04:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.364 07:04:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.364 07:04:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.364 07:04:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:24.364 07:04:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:24.364 07:04:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5143380 kB' 'MemAvailable: 9513196 kB' 'Buffers: 38140 kB' 'Cached: 4448904 kB' 'SwapCached: 0 kB' 'Active: 1204656 kB' 'Inactive: 3408644 kB' 'Active(anon): 130752 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073904 kB' 'Inactive(file): 3406852 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 144480 kB' 'Mapped: 74664 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310084 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99108 kB' 'KernelStack: 4516 kB' 'PageTables: 3108 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5082028 kB' 'Committed_AS: 707688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:24.364 07:04:57 -- setup/common.sh@32 -- # [[ MemTotal == 
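The long runs of "[[ <field> == ... ]] / continue" entries in this trace are setup/common.sh's get_meminfo helper walking every key of the snapshot printed above until the requested key matches, then echoing its value. A minimal self-contained sketch of the same technique, assuming only bash and /proc/meminfo; the function name and error handling are illustrative, not the exact SPDK source:

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo the way the traced loop does:
    # split each "Key: value unit" line on ':' and ' ', skip until the key matches.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated "continue" trace entries
            echo "$val"                        # the "kB" unit, if any, lands in $_
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Rsvd   # prints 0 on this test VM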
00:04:24.364 07:04:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:24.364 07:04:57 -- setup/common.sh@32 -- # continue
[trace condensed: the IFS/read/test/continue cycle repeats for every field from MemFree through CmaFree until the requested key matches]
00:04:24.365 07:04:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:24.365 07:04:57 -- setup/common.sh@33 -- # echo 1025
00:04:24.365 07:04:57 -- setup/common.sh@33 -- # return 0
00:04:24.365 07:04:57 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:24.365 07:04:57 -- setup/hugepages.sh@112 -- # get_nodes
00:04:24.365 07:04:57 -- setup/hugepages.sh@27 -- # local node
00:04:24.365 07:04:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.365 07:04:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:04:24.365 07:04:57 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:24.365 07:04:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:24.365 07:04:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:24.365 07:04:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:24.365 07:04:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:24.365 07:04:57 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.365 07:04:57 -- setup/common.sh@18 -- # local node=0
00:04:24.365 07:04:57 -- setup/common.sh@19 -- # local var val
00:04:24.365 07:04:57 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.365 07:04:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.365 07:04:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:24.365 07:04:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:24.365 07:04:57 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.365 07:04:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.365 07:04:57 -- setup/common.sh@31 -- # IFS=': '
00:04:24.365 07:04:57 -- setup/common.sh@31 -- # read -r var val _
00:04:24.365 07:04:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5143640 kB' 'MemUsed: 7119616 kB' 'Active: 1204640 kB' 'Inactive: 3408648 kB' 'Active(anon): 130736 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073904 kB' 'Inactive(file): 3406856 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'FilePages: 4487048 kB' 'Mapped: 74404 kB' 'AnonPages: 144672 kB' 'Shmem: 2628 kB' 'KernelStack: 4480 kB' 'PageTables: 3180 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 210976 kB' 'Slab: 310132 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
[trace condensed: the field scan repeats over the node0 snapshot until HugePages_Surp matches]
00:04:24.366 07:04:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.366 07:04:57 -- setup/common.sh@33 -- # echo 0
00:04:24.366 07:04:57 -- setup/common.sh@33 -- # return 0
00:04:24.366 07:04:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:24.366 07:04:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:24.366 07:04:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:24.366 07:04:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
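When get_meminfo is given a node number, the same scan runs against that node's sysfs file instead: the trace above switches mem_f to /sys/devices/system/node/node0/meminfo and strips the leading "Node 0" with the extglob pattern "${mem[@]#Node +([0-9]) }". A hedged sketch of totalling per-node hugepage counters the same way; the sysfs layout is standard Linux, but the summing loop is illustrative rather than SPDK's code:

    #!/usr/bin/env bash
    # Sketch: sum HugePages_Total/HugePages_Surp across NUMA nodes. Node meminfo
    # lines read "Node 0 HugePages_Total:  1025", so two leading fields are skipped
    # (the traced script strips that prefix with a pattern substitution instead).
    total=0 surp=0
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        while IFS=': ' read -r _ _ var val _; do
            case $var in
                HugePages_Total) (( total += val )) ;;
                HugePages_Surp)  (( surp += val )) ;;
            esac
        done < "$node_dir/meminfo"
    done
    echo "HugePages_Total=$total HugePages_Surp=$surp"   # 1025 and 0 in this run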
00:04:24.366 07:04:57 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:04:24.366 node0=1025 expecting 1025
00:04:24.366 07:04:57 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:04:24.366
00:04:24.366 real	0m1.142s
00:04:24.366 user	0m0.245s
00:04:24.366 sys	0m0.922s
00:04:24.366 07:04:57 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:24.366 07:04:57 -- common/autotest_common.sh@10 -- # set +x
00:04:24.366 ************************************
00:04:24.366 END TEST odd_alloc
00:04:24.366 ************************************
00:04:24.366 07:04:57 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:24.366 07:04:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:24.366 07:04:57 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:24.366 07:04:57 -- common/autotest_common.sh@10 -- # set +x
00:04:24.366 ************************************
00:04:24.366 START TEST custom_alloc
00:04:24.366 ************************************
00:04:24.366 07:04:57 -- common/autotest_common.sh@1102 -- # custom_alloc
00:04:24.366 07:04:57 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:24.366 07:04:57 -- setup/hugepages.sh@169 -- # local node
00:04:24.366 07:04:57 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:24.366 07:04:57 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:24.366 07:04:57 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:24.366 07:04:57 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:24.366 07:04:57 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:24.366 07:04:57 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:24.366 07:04:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:24.366 07:04:57 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:24.366 07:04:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:24.366 07:04:57 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:24.366 07:04:57 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.366 07:04:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:24.366 07:04:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:24.366 07:04:57 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.366 07:04:57 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.366 07:04:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:24.366 07:04:57 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:24.366 07:04:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:24.366 07:04:57 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:24.366 07:04:57 -- setup/hugepages.sh@83 -- # : 0
00:04:24.366 07:04:57 -- setup/hugepages.sh@84 -- # : 0
00:04:24.366 07:04:57 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:24.366 07:04:57 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:24.366 07:04:57 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:24.366 07:04:57 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:24.366 07:04:57 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:24.366 07:04:57 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:24.366 07:04:57 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:24.366 07:04:57 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:24.366 07:04:57 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:24.366 07:04:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:24.366 07:04:57 -- setup/hugepages.sh@65 -- # local _no_nodes=1
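In the custom_alloc prologue above, get_test_nr_hugepages turns the requested 1048576 kB (1 GiB) into nr_hugepages=512 because the hugepage size on this VM is 2048 kB, and the per-node helper then pins all 512 pages to the single node. A small sketch of that arithmetic; the variable names are illustrative and the real helper also honors explicit per-node arguments:

    #!/usr/bin/env bash
    # Sketch: convert a requested size in kB into a hugepage count, then
    # spread the pages evenly across the available NUMA nodes.
    size_kb=1048576                                            # 1 GiB, as in the trace
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this VM
    nr_hugepages=$(( size_kb / hp_kb ))                        # 1048576 / 2048 = 512

    nodes=(/sys/devices/system/node/node[0-9]*)
    per_node=$(( nr_hugepages / ${#nodes[@]} ))                # one node here -> all 512
    echo "nr_hugepages=$nr_hugepages per_node=$per_node"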
00:04:24.366 07:04:57 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:24.366 07:04:57 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:24.366 07:04:57 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:24.367 07:04:57 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:24.367 07:04:57 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:24.367 07:04:57 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:24.367 07:04:57 -- setup/hugepages.sh@78 -- # return 0
00:04:24.367 07:04:57 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:24.367 07:04:57 -- setup/hugepages.sh@187 -- # setup output
00:04:24.367 07:04:57 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.367 07:04:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:24.625 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev
00:04:24.625 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:24.887 07:04:58 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:24.887 07:04:58 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:24.887 07:04:58 -- setup/hugepages.sh@89 -- # local node
00:04:24.887 07:04:58 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:24.887 07:04:58 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:24.887 07:04:58 -- setup/hugepages.sh@92 -- # local surp
00:04:24.887 07:04:58 -- setup/hugepages.sh@93 -- # local resv
00:04:24.887 07:04:58 -- setup/hugepages.sh@94 -- # local anon
00:04:24.887 07:04:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:24.887 07:04:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:24.887 07:04:58 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:24.887 07:04:58 -- setup/common.sh@18 -- # local node=
00:04:24.887 07:04:58 -- setup/common.sh@19 -- # local var val
00:04:24.887 07:04:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.887 07:04:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.887 07:04:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.887 07:04:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.887 07:04:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.887 07:04:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.887 07:04:58 -- setup/common.sh@31 -- # IFS=': '
00:04:24.887 07:04:58 -- setup/common.sh@31 -- # read -r var val _
00:04:24.887 07:04:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6194620 kB' 'MemAvailable: 10564440 kB' 'Buffers: 38140 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204396 kB' 'Inactive: 3408648 kB' 'Active(anon): 130492 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073904 kB' 'Inactive(file): 3406856 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 144400 kB' 'Mapped: 74356 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 309952 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98976 kB' 'KernelStack: 4424 kB' 'PageTables: 2868 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5607340 kB' 'Committed_AS: 717356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: the field scan repeats from MemTotal through HardwareCorrupted until AnonHugePages matches]
00:04:24.888 07:04:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:24.888 07:04:58 -- setup/common.sh@33 -- # echo 0
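The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" entry above is verify_nr_hugepages probing /sys/kernel/mm/transparent_hugepage/enabled: because THP is not pinned to "[never]" on this VM, the script samples AnonHugePages (0 kB here) so that transparent hugepages cannot be confused with the explicitly reserved pool. A hedged sketch of the same guard, illustrative rather than the exact SPDK logic:

    #!/usr/bin/env bash
    # Sketch: sample AnonHugePages only when transparent hugepages may be in use.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # kB; 0 in this run
    fi
    echo "anon_hugepages=$anon"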
00:04:24.888 07:04:58 -- setup/common.sh@33 -- # return 0
00:04:24.888 07:04:58 -- setup/hugepages.sh@97 -- # anon=0
00:04:24.888 07:04:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:24.888 07:04:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.888 07:04:58 -- setup/common.sh@18 -- # local node=
00:04:24.888 07:04:58 -- setup/common.sh@19 -- # local var val
00:04:24.888 07:04:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.888 07:04:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.888 07:04:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.888 07:04:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.888 07:04:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.888 07:04:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.888 07:04:58 -- setup/common.sh@31 -- # IFS=': '
00:04:24.888 07:04:58 -- setup/common.sh@31 -- # read -r var val _
00:04:24.888 07:04:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6194620 kB' 'MemAvailable: 10564440 kB' 'Buffers: 38140 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204656 kB' 'Inactive: 3408648 kB' 'Active(anon): 130752 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073904 kB' 'Inactive(file): 3406856 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 144660 kB' 'Mapped: 74356 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 309952 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98976 kB' 'KernelStack: 4424 kB' 'PageTables: 2868 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5607340 kB' 'Committed_AS: 717356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: the field scan repeats from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:04:24.890 07:04:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.890 07:04:58 -- setup/common.sh@33 -- # echo 0
00:04:24.890 07:04:58 -- setup/common.sh@33 -- # return 0
00:04:24.890 07:04:58 -- setup/hugepages.sh@99 -- # surp=0
00:04:24.890 07:04:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:24.890 07:04:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:24.890 07:04:58 -- setup/common.sh@18 -- # local node=
00:04:24.890 07:04:58 -- setup/common.sh@19 -- # local var val
00:04:24.890 07:04:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.890 07:04:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.890 07:04:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.890 07:04:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.890 07:04:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.890 07:04:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.890 07:04:58 -- setup/common.sh@31 -- # IFS=': '
00:04:24.890 07:04:58 -- setup/common.sh@31 -- # read -r var val _
00:04:24.890 07:04:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6194880 kB' 'MemAvailable: 10564700 kB' 'Buffers: 38140 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204916 kB' 'Inactive: 3408648 kB' 'Active(anon): 131012 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073904 kB' 'Inactive(file): 3406856 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 144920 kB' 'Mapped: 74356 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 309952 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98976 kB' 'KernelStack: 4424 kB' 'PageTables: 2868 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5607340 kB' 'Committed_AS: 715932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14304 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576
00:04:24.890 07:04:58 -- setup/common.sh@31-32 -- # [... IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue, repeated for every field of the snapshot above from MemTotal through HugePages_Free -- none match ...]
00:04:24.891 07:04:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:24.891 07:04:58 -- setup/common.sh@33 -- # echo 0
00:04:24.891 07:04:58 -- setup/common.sh@33 -- # return 0
00:04:24.891 07:04:58 -- setup/hugepages.sh@100 -- # resv=0
00:04:24.891 07:04:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:24.891 nr_hugepages=512
00:04:24.891 resv_hugepages=0
00:04:24.891 surplus_hugepages=0
00:04:24.891 anon_hugepages=0
00:04:24.891 07:04:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:24.891 07:04:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:24.891 07:04:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:24.891 07:04:58 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:24.891 07:04:58 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
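For reference, the get_meminfo helper that keeps being traced above reduces to roughly the following (a minimal sketch reconstructed from this xtrace, not the verbatim setup/common.sh source; the handling around an invalid node argument is guessed):

  shopt -s extglob  # the "+([0-9])" pattern below is an extglob
  get_meminfo() {   # usage: get_meminfo <field> [node], e.g. get_meminfo HugePages_Rsvd 0
      local get=$1 node=$2
      local var val _
      local mem_f mem
      mem_f=/proc/meminfo
      # per-node counters live in sysfs; prefer them when a node id is given
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # sysfs meminfo prefixes every line with "Node N "; strip that prefix
      mem=("${mem[@]#Node +([0-9]) }")
      # walk the "Field: value [kB]" lines until the requested field matches
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }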
00:04:24.891 07:04:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:24.891 07:04:58 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:24.891 07:04:58 -- setup/common.sh@18 -- # local node=
00:04:24.891 07:04:58 -- setup/common.sh@19 -- # local var val
00:04:24.891 07:04:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.891 07:04:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.891 07:04:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:24.891 07:04:58 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:24.891 07:04:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.891 07:04:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.891 07:04:58 -- setup/common.sh@31 -- # IFS=': '
00:04:24.891 07:04:58 -- setup/common.sh@31 -- # read -r var val _
00:04:24.891 07:04:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6195164 kB' 'MemAvailable: 10564984 kB' 'Buffers: 38140 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204864 kB' 'Inactive: 3408648 kB' 'Active(anon): 130960 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073904 kB' 'Inactive(file): 3406856 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'AnonPages: 145000 kB' 'Mapped: 74356 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 309952 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98976 kB' 'KernelStack: 4460 kB' 'PageTables: 2816 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5607340 kB' 'Committed_AS: 708540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14320 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:24.891 07:04:58 -- setup/common.sh@31-32 -- # [... scan: every field from MemTotal through CmaFree fails [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and continues ...]
00:04:24.892 07:04:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:24.892 07:04:58 -- setup/common.sh@33 -- # echo 512
00:04:24.892 07:04:58 -- setup/common.sh@33 -- # return 0
00:04:24.892 07:04:58 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:24.892 07:04:58 -- setup/hugepages.sh@112 -- # get_nodes
00:04:24.892 07:04:58 -- setup/hugepages.sh@27 -- # local node
00:04:24.892 07:04:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:24.892 07:04:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:24.892 07:04:58 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:24.892 07:04:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
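The get_nodes call above discovers the NUMA layout by globbing sysfs. A sketch of what those @27-@33 lines imply; the right-hand side of the nodes_sys assignment is only visible as its result (512), so the sysfs read below is an assumption:

  get_nodes() {
      local node
      # one directory per NUMA node: /sys/devices/system/node/node0, node1, ...
      # ("+([0-9])" is extglob, as enabled in the get_meminfo sketch earlier)
      for node in /sys/devices/system/node/node+([0-9]); do
          # record each node's current 2048 kB hugepage count, keyed by node id
          # (assumed source; the trace only shows the assigned value, 512)
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}  # 1 on this single-node VM
      (( no_nodes > 0 ))         # at least one node must be present
  }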
00:04:24.892 07:04:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:24.892 07:04:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:24.892 07:04:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:24.892 07:04:58 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:24.892 07:04:58 -- setup/common.sh@18 -- # local node=0
00:04:24.892 07:04:58 -- setup/common.sh@19 -- # local var val
00:04:24.892 07:04:58 -- setup/common.sh@20 -- # local mem_f mem
00:04:24.892 07:04:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:24.892 07:04:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:24.892 07:04:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:24.892 07:04:58 -- setup/common.sh@28 -- # mapfile -t mem
00:04:24.893 07:04:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:24.893 07:04:58 -- setup/common.sh@31 -- # IFS=': '
00:04:24.893 07:04:58 -- setup/common.sh@31 -- # read -r var val _
00:04:24.893 07:04:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 6195592 kB' 'MemUsed: 6067664 kB' 'Active: 1204440 kB' 'Inactive: 3408648 kB' 'Active(anon): 130536 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073904 kB' 'Inactive(file): 3406856 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 620 kB' 'Writeback: 0 kB' 'FilePages: 4487048 kB' 'Mapped: 74356 kB' 'AnonPages: 144320 kB' 'Shmem: 2628 kB' 'KernelStack: 4512 kB' 'PageTables: 3172 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 210976 kB' 'Slab: 309776 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:24.893 07:04:58 -- setup/common.sh@31-32 -- # [... scan: every node0 field from MemTotal through HugePages_Free fails [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and continues ...]
00:04:24.893 07:04:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:24.893 07:04:58 -- setup/common.sh@33 -- # echo 0
00:04:24.893 07:04:58 -- setup/common.sh@33 -- # return 0
00:04:24.893 07:04:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:24.893 07:04:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:24.893 07:04:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:24.893 07:04:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:24.893 node0=512 expecting 512
00:04:24.893 07:04:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:24.893 07:04:58 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:24.893
00:04:24.893 real    0m0.650s
00:04:24.893 user    0m0.252s
00:04:24.893 sys     0m0.432s
00:04:24.893 07:04:58 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:04:24.893 07:04:58 -- common/autotest_common.sh@10 -- # set +x
00:04:24.893 ************************************
00:04:24.893 END TEST custom_alloc
00:04:24.893 ************************************
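That closes out custom_alloc: with surplus and reserved pages both zero, the kernel's pool exactly matches the 512 pages the test configured, globally and on node 0. Condensed, the bookkeeping verified above (values taken straight from the trace):

  nr_hugepages=512                      # what the test asked for
  surp=$(get_meminfo HugePages_Surp)    # 0
  resv=$(get_meminfo HugePages_Rsvd)    # 0
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))  # 512 == 512+0+0
  # per node: measured nodes_sys[0]=512 matches expected nodes_test[0]=512,
  # hence the "node0=512 expecting 512" line above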
00:04:25.152 07:04:58 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:25.152 07:04:58 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']'
00:04:25.152 07:04:58 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:04:25.152 07:04:58 -- common/autotest_common.sh@10 -- # set +x
00:04:25.152 ************************************
00:04:25.152 START TEST no_shrink_alloc
00:04:25.152 ************************************
00:04:25.152 07:04:58 -- common/autotest_common.sh@1102 -- # no_shrink_alloc
00:04:25.152 07:04:58 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:25.152 07:04:58 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:25.152 07:04:58 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:25.152 07:04:58 -- setup/hugepages.sh@51 -- # shift
00:04:25.152 07:04:58 -- setup/hugepages.sh@52 -- # node_ids=("$@")
00:04:25.152 07:04:58 -- setup/hugepages.sh@52 -- # local node_ids
00:04:25.152 07:04:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:25.152 07:04:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:25.152 07:04:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:25.152 07:04:58 -- setup/hugepages.sh@62 -- # user_nodes=("$@")
00:04:25.152 07:04:58 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:25.152 07:04:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:25.152 07:04:58 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:25.152 07:04:58 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:25.152 07:04:58 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:25.152 07:04:58 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:25.152 07:04:58 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:25.152 07:04:58 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:25.152 07:04:58 -- setup/hugepages.sh@73 -- # return 0
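no_shrink_alloc begins by sizing its pool: 2097152 kB (2 GiB) of 2048 kB hugepages works out to the nr_hugepages=1024 seen above, all of it assigned to node 0. The division itself is not in the xtrace, only its operands and result, so the body below is an inference:

  get_test_nr_hugepages() {
      local size=$1                 # requested pool size in kB; 2097152 kB = 2 GiB
      shift
      local node_ids=("$@")         # optional NUMA node list, here: 0
      (( size >= default_hugepages )) || return 1   # must cover >= one 2048 kB page
      nr_hugepages=$(( size / default_hugepages ))  # 2097152 / 2048 = 1024 (assumed formula)
      get_test_nr_hugepages_per_node "${node_ids[@]}"
  }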
00:04:25.152 07:04:58 -- setup/hugepages.sh@198 -- # setup output
00:04:25.152 07:04:58 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:25.152 07:04:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:25.445 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev
00:04:25.445 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.017 07:04:59 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:26.017 07:04:59 -- setup/hugepages.sh@89 -- # local node
00:04:26.017 07:04:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.017 07:04:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.017 07:04:59 -- setup/hugepages.sh@92 -- # local surp
00:04:26.017 07:04:59 -- setup/hugepages.sh@93 -- # local resv
00:04:26.017 07:04:59 -- setup/hugepages.sh@94 -- # local anon
00:04:26.017 07:04:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.017 07:04:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.017 07:04:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.017 07:04:59 -- setup/common.sh@18 -- # local node=
00:04:26.017 07:04:59 -- setup/common.sh@19 -- # local var val
00:04:26.017 07:04:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.017 07:04:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.017 07:04:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.017 07:04:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.017 07:04:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.017 07:04:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.017 07:04:59 -- setup/common.sh@31 -- # IFS=': '
00:04:26.017 07:04:59 -- setup/common.sh@31 -- # read -r var val _
00:04:26.017 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146460 kB' 'MemAvailable: 9516288 kB' 'Buffers: 38148 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204264 kB' 'Inactive: 3408624 kB' 'Active(anon): 130328 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073936 kB' 'Inactive(file): 3406832 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 144188 kB' 'Mapped: 74368 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 309936 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98960 kB' 'KernelStack: 4440 kB' 'PageTables: 3240 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 704000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14272 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
00:04:26.017 07:04:59 -- setup/common.sh@31-32 -- # [... scan: every field from MemTotal through HardwareCorrupted fails [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and continues ...]
00:04:26.018 07:04:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.018 07:04:59 -- setup/common.sh@33 -- # echo 0
00:04:26.018 07:04:59 -- setup/common.sh@33 -- # return 0
00:04:26.018 07:04:59 -- setup/hugepages.sh@97 -- # anon=0
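The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 above is presumably a transparent-hugepage guard: the string it inspects is the usual content of /sys/kernel/mm/transparent_hugepage/enabled (the file name itself is not visible in this trace). Roughly:

  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
  if [[ $thp != *"[never]"* ]]; then
      # THP is not disabled, so anonymous huge pages could skew the hugetlb
      # numbers; sample AnonHugePages (0 kB in this run) to discount them later
      anon=$(get_meminfo AnonHugePages)
  fi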
return 0 00:04:26.018 07:04:59 -- setup/hugepages.sh@97 -- # anon=0 00:04:26.018 07:04:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.018 07:04:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.018 07:04:59 -- setup/common.sh@18 -- # local node= 00:04:26.018 07:04:59 -- setup/common.sh@19 -- # local var val 00:04:26.018 07:04:59 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.018 07:04:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.018 07:04:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.018 07:04:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.018 07:04:59 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.018 07:04:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.018 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146460 kB' 'MemAvailable: 9516288 kB' 'Buffers: 38148 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204524 kB' 'Inactive: 3408624 kB' 'Active(anon): 130588 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073936 kB' 'Inactive(file): 3406832 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 144448 kB' 'Mapped: 74368 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 309936 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98960 kB' 'KernelStack: 4440 kB' 'PageTables: 3240 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 703104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14272 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB' 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.018 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.018 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 
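The loop traced here is get_meminfo from setup/common.sh: it snapshots a meminfo file into an array and scans it key by key until the requested field matches, then echoes its value. A condensed reconstruction from this trace (a sketch, not the verbatim upstream script):

shopt -s extglob

# get_meminfo <key> [node]: print the value of <key> from /proc/meminfo,
# or from /sys/devices/system/node/node<node>/meminfo when a node is given.
get_meminfo() {
    local get=$1 node=$2 var val _rest line
    local mem_f=/proc/meminfo mem
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines carry a "Node N " prefix; strip it (extglob pattern)
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# e.g. get_meminfo AnonHugePages -> 0 and get_meminfo HugePages_Total -> 1024,
# matching the values echoed in the trace.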
00:04:26.018 07:04:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:26.018 07:04:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.018 07:04:59 -- setup/common.sh@18 -- # local node=
00:04:26.018 07:04:59 -- setup/common.sh@19 -- # local var val
00:04:26.018 07:04:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.018 07:04:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.018 07:04:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.018 07:04:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.018 07:04:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.018 07:04:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.018 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146460 kB' 'MemAvailable: 9516288 kB' 'Buffers: 38148 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204524 kB' 'Inactive: 3408624 kB' 'Active(anon): 130588 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073936 kB' 'Inactive(file): 3406832 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 144448 kB' 'Mapped: 74368 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 309936 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98960 kB' 'KernelStack: 4440 kB' 'PageTables: 3240 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 703104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14272 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: each key compared against HugePages_Surp, continue until the match]
00:04:26.019 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.019 07:04:59 -- setup/common.sh@33 -- # echo 0
00:04:26.019 07:04:59 -- setup/common.sh@33 -- # return 0
00:04:26.019 07:04:59 -- setup/hugepages.sh@99 -- # surp=0
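Note that each counter costs one full scan of the snapshot, which is why the trace repeats the same walk once per key. For comparison, a single pass can collect all four hugepage counters at once (illustrative one-liner, not part of the SPDK scripts):

awk -F': *' '/^HugePages_(Total|Free|Rsvd|Surp)/ { print $1 "=" $2 }' /proc/meminfo

# prints HugePages_Total=1024, HugePages_Free=1024, HugePages_Rsvd=0,
# HugePages_Surp=0 for the snapshot shown above.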
00:04:26.019 07:04:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:26.019 07:04:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
[trace condensed: same get_meminfo preamble as above, node unset, mem_f=/proc/meminfo]
00:04:26.019 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146720 kB' 'MemAvailable: 9516548 kB' 'Buffers: 38148 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204524 kB' 'Inactive: 3408624 kB' 'Active(anon): 130588 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073936 kB' 'Inactive(file): 3406832 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 144320 kB' 'Mapped: 74368 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 309936 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98960 kB' 'KernelStack: 4440 kB' 'PageTables: 3240 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 703104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14272 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: each key compared against HugePages_Rsvd, continue until the match]
00:04:26.020 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.020 07:04:59 -- setup/common.sh@33 -- # echo 0
00:04:26.020 07:04:59 -- setup/common.sh@33 -- # return 0
00:04:26.020 07:04:59 -- setup/hugepages.sh@100 -- # resv=0
00:04:26.020 07:04:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:26.020 07:04:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:26.020 07:04:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:26.020 07:04:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:26.020 07:04:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.020 07:04:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
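The checks at hugepages.sh@107-110 assert that the configured page count is consistent with the kernel's counters: HugePages_Total must equal nr_hugepages plus any surplus and reserved pages. Rolled into one helper (a sketch assuming the get_meminfo reconstruction above):

# verify that the kernel agrees with the requested hugepage count
verify_hugepage_accounting() {
    local requested=$1 total surp resv
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    echo "nr_hugepages=$requested resv_hugepages=$resv surplus_hugepages=$surp"
    (( total == requested + surp + resv ))
}

# verify_hugepage_accounting 1024 succeeds against the snapshots above
# (1024 == 1024 + 0 + 0).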
00:04:26.020 07:04:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:26.020 07:04:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:26.020 07:04:59 -- setup/common.sh@18 -- # local node=
00:04:26.020 07:04:59 -- setup/common.sh@19 -- # local var val
00:04:26.020 07:04:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.021 07:04:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.021 07:04:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.021 07:04:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.021 07:04:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.021 07:04:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.021 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146672 kB' 'MemAvailable: 9516500 kB' 'Buffers: 38148 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204784 kB' 'Inactive: 3408624 kB' 'Active(anon): 130848 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073936 kB' 'Inactive(file): 3406832 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 144580 kB' 'Mapped: 74368 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 309936 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 98960 kB' 'KernelStack: 4508 kB' 'PageTables: 3240 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 702604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14272 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[trace condensed: each key compared against HugePages_Total, continue until the match]
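The pass traced next repeats this readback against each NUMA node's sysfs view and compares the per-node totals with what was requested (node0=1024 expecting 1024 below). A standalone equivalent of that per-node readout (illustrative; node meminfo lines have the form "Node N Key: value"):

# print HugePages_Total per NUMA node from sysfs
for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    total=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node_dir/meminfo")
    echo "node$node=$total"
done

# on this box: node0=1024, matching the expectation echoed in the trace.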
00:04:26.022 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.022 07:04:59 -- setup/common.sh@33 -- # echo 1024
00:04:26.022 07:04:59 -- setup/common.sh@33 -- # return 0
00:04:26.022 07:04:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.022 07:04:59 -- setup/hugepages.sh@112 -- # get_nodes
00:04:26.022 07:04:59 -- setup/hugepages.sh@27 -- # local node
00:04:26.022 07:04:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.022 07:04:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:26.022 07:04:59 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:26.022 07:04:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:26.022 07:04:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:26.022 07:04:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:26.022 07:04:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:26.022 07:04:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.022 07:04:59 -- setup/common.sh@18 -- # local node=0
00:04:26.022 07:04:59 -- setup/common.sh@19 -- # local var val
00:04:26.022 07:04:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.022 07:04:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.022 07:04:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:26.022 07:04:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:26.022 07:04:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.022 07:04:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.022 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146908 kB' 'MemUsed: 7116348 kB' 'Active: 1204596 kB' 'Inactive: 3408624 kB' 'Active(anon): 130660 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073936 kB' 'Inactive(file): 3406832 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'FilePages: 4487056 kB' 'Mapped: 74356 kB' 'AnonPages: 144692 kB' 'Shmem: 2628 kB' 'KernelStack: 4516 kB' 'PageTables: 3448 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 210976 kB' 'Slab: 310036 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99060 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: each node0 key compared against HugePages_Surp, continue until the match]
00:04:26.023 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.023 07:04:59 -- setup/common.sh@33 -- # echo 0
00:04:26.023 07:04:59 -- setup/common.sh@33 -- # return 0
00:04:26.023 07:04:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.023 07:04:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.023 07:04:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.023 07:04:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.023 07:04:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:26.023 07:04:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:26.023 07:04:59 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:26.023 07:04:59 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:26.023 07:04:59 -- setup/hugepages.sh@202 -- # setup output
00:04:26.023 07:04:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.023 07:04:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:26.287 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev
00:04:26.287 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:26.287 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:26.287 07:04:59 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:26.287 07:04:59 -- setup/hugepages.sh@89 -- # local node
00:04:26.287 07:04:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.287 07:04:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.287 07:04:59 -- setup/hugepages.sh@92 -- # local surp
00:04:26.287 07:04:59 -- setup/hugepages.sh@93 -- # local resv
00:04:26.287 07:04:59 -- setup/hugepages.sh@94 -- # local anon
00:04:26.287 07:04:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.287 07:04:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.287 07:04:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.287 07:04:59 -- setup/common.sh@18 -- # local node=
00:04:26.287 07:04:59 -- setup/common.sh@19 -- # local var val
00:04:26.287 07:04:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.287 07:04:59 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:26.287 07:04:59 -- setup/hugepages.sh@89 -- # local node
00:04:26.287 07:04:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:26.287 07:04:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:26.287 07:04:59 -- setup/hugepages.sh@92 -- # local surp
00:04:26.287 07:04:59 -- setup/hugepages.sh@93 -- # local resv
00:04:26.287 07:04:59 -- setup/hugepages.sh@94 -- # local anon
00:04:26.287 07:04:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:26.287 07:04:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:26.287 07:04:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:26.287 07:04:59 -- setup/common.sh@18 -- # local node=
00:04:26.287 07:04:59 -- setup/common.sh@19 -- # local var val
00:04:26.287 07:04:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.287 07:04:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.287 07:04:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.287 07:04:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.287 07:04:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.287 07:04:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.287 07:04:59 -- setup/common.sh@31 -- # IFS=': '
00:04:26.287 07:04:59 -- setup/common.sh@31 -- # read -r var val _
00:04:26.287 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146484 kB' 'MemAvailable: 9516312 kB' 'Buffers: 38148 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1205264 kB' 'Inactive: 3408624 kB' 'Active(anon): 131328 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073936 kB' 'Inactive(file): 3406832 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 145464 kB' 'Mapped: 74360 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310172 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99196 kB' 'KernelStack: 4508 kB' 'PageTables: 2912 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 707064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14256 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: every key from MemTotal through HardwareCorrupted is tested against AnonHugePages and skipped via 'continue']
00:04:26.288 07:04:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.288 07:04:59 -- setup/common.sh@33 -- # echo 0
00:04:26.288 07:04:59 -- setup/common.sh@33 -- # return 0
00:04:26.288 07:04:59 -- setup/hugepages.sh@97 -- # anon=0
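The AnonHugePages lookup that just returned 0 is the pattern repeated throughout this trace: common.sh splits each meminfo line on ':' and whitespace via IFS, compares the key field against the requested name, and echoes the value on the first match. A standalone sketch of that loop (the traced shape, not the verbatim helper):

    # Read /proc/meminfo line by line; IFS=': ' splits "AnonHugePages:  0 kB"
    # into var=AnonHugePages, val=0, _=kB.
    get=AnonHugePages
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # non-matching keys skip, as traced
        echo "$val"                        # kB for sizes, a bare count for HugePages_*
        break
    done < /proc/meminfo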
00:04:26.288 07:04:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:26.288 07:04:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.288 07:04:59 -- setup/common.sh@18 -- # local node=
00:04:26.288 07:04:59 -- setup/common.sh@19 -- # local var val
00:04:26.288 07:04:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.288 07:04:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.288 07:04:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.288 07:04:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.288 07:04:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.288 07:04:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.288 07:04:59 -- setup/common.sh@31 -- # IFS=': '
00:04:26.288 07:04:59 -- setup/common.sh@31 -- # read -r var val _
00:04:26.288 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146516 kB' 'MemAvailable: 9516344 kB' 'Buffers: 38148 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204864 kB' 'Inactive: 3408624 kB' 'Active(anon): 130928 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073936 kB' 'Inactive(file): 3406832 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 145024 kB' 'Mapped: 74312 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310264 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99288 kB' 'KernelStack: 4492 kB' 'PageTables: 2892 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 712424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14272 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: every key from MemTotal through HugePages_Rsvd is tested against HugePages_Surp and skipped via 'continue']
00:04:26.289 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.289 07:04:59 -- setup/common.sh@33 -- # echo 0
00:04:26.289 07:04:59 -- setup/common.sh@33 -- # return 0
00:04:26.289 07:04:59 -- setup/hugepages.sh@99 -- # surp=0
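For a one-off query outside this helper, the same surplus-page lookup can be done with a single awk call; this is an alternative shown for comparison, not what setup/common.sh does:

    # HugePages_Surp counts pages allocated above nr_hugepages under pressure;
    # 0 here means there are no overcommitted pages to account for.
    awk '$1 == "HugePages_Surp:" { print $2 }' /proc/meminfo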
00:04:26.289 07:04:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:26.289 07:04:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:26.289 07:04:59 -- setup/common.sh@18 -- # local node=
00:04:26.289 07:04:59 -- setup/common.sh@19 -- # local var val
00:04:26.289 07:04:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.289 07:04:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.289 07:04:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.289 07:04:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.289 07:04:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.289 07:04:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.289 07:04:59 -- setup/common.sh@31 -- # IFS=': '
00:04:26.289 07:04:59 -- setup/common.sh@31 -- # read -r var val _
00:04:26.289 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146776 kB' 'MemAvailable: 9516604 kB' 'Buffers: 38148 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1205124 kB' 'Inactive: 3408624 kB' 'Active(anon): 131188 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073936 kB' 'Inactive(file): 3406832 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 145284 kB' 'Mapped: 74312 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310264 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99288 kB' 'KernelStack: 4492 kB' 'PageTables: 2892 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 712424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14272 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: every key from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped via 'continue']
00:04:26.291 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.291 07:04:59 -- setup/common.sh@33 -- # echo 0
00:04:26.291 07:04:59 -- setup/common.sh@33 -- # return 0
00:04:26.291 07:04:59 -- setup/hugepages.sh@100 -- # resv=0
00:04:26.291 nr_hugepages=1024
00:04:26.291 resv_hugepages=0
00:04:26.291 07:04:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:26.291 07:04:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:26.291 surplus_hugepages=0
00:04:26.291 anon_hugepages=0
00:04:26.291 07:04:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:26.291 07:04:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:26.291 07:04:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.291 07:04:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
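The two arithmetic checks above are the heart of verify_nr_hugepages: the kernel's HugePages_Total must equal the requested count plus surplus and reserved pages (here 1024 == 1024 + 0 + 0). Restated as a self-contained sketch using the trace's variable names:

    # Accounting identity traced at hugepages.sh@107/@109.
    nr_hugepages=1024   # the count the test configured
    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    surp=$(awk '$1 == "HugePages_Surp:" { print $2 }' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:" { print $2 }' /proc/meminfo)
    ((total == nr_hugepages + surp + resv)) || echo "hugepage accounting mismatch" >&2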
00:04:26.291 07:04:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:26.291 07:04:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:26.291 07:04:59 -- setup/common.sh@18 -- # local node=
00:04:26.291 07:04:59 -- setup/common.sh@19 -- # local var val
00:04:26.291 07:04:59 -- setup/common.sh@20 -- # local mem_f mem
00:04:26.291 07:04:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.291 07:04:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.291 07:04:59 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.291 07:04:59 -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.291 07:04:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.291 07:04:59 -- setup/common.sh@31 -- # IFS=': '
00:04:26.291 07:04:59 -- setup/common.sh@31 -- # read -r var val _
00:04:26.291 07:04:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12263256 kB' 'MemFree: 5146768 kB' 'MemAvailable: 9516596 kB' 'Buffers: 38148 kB' 'Cached: 4448908 kB' 'SwapCached: 0 kB' 'Active: 1204616 kB' 'Inactive: 3408620 kB' 'Active(anon): 130676 kB' 'Inactive(anon): 1792 kB' 'Active(file): 1073940 kB' 'Inactive(file): 3406828 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'SwapTotal: 0 kB' 'SwapFree: 0 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'AnonPages: 144620 kB' 'Mapped: 74408 kB' 'Shmem: 2628 kB' 'KReclaimable: 210976 kB' 'Slab: 310112 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99136 kB' 'KernelStack: 4472 kB' 'PageTables: 2892 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 5083052 kB' 'Committed_AS: 711600 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 14272 kB' 'VmallocChunk: 0 kB' 'Percpu: 11136 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 161644 kB' 'DirectMap2M: 4032512 kB' 'DirectMap1G: 10485760 kB'
[xtrace condensed: every key from MemTotal through CmaFree is tested against HugePages_Total and skipped via 'continue']
00:04:26.292 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.292 07:04:59 -- setup/common.sh@33 -- # echo 1024
00:04:26.292 07:04:59 -- setup/common.sh@33 -- # return 0
00:04:26.292 07:04:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.292 07:04:59 -- setup/hugepages.sh@112 -- # get_nodes
00:04:26.292 07:04:59 -- setup/hugepages.sh@27 -- # local node
00:04:26.292 07:04:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.292 07:04:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:26.292 07:04:59 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:26.292 07:04:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
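get_nodes, traced just above, discovers the NUMA topology by globbing the per-node sysfs directories with an extglob pattern and keying an array on the node number. A sketch of that enumeration (populating the count from the sysfs hugepage knob is an assumption; the trace only shows the expanded value 1024):

    shopt -s extglob                      # enables the +([0-9]) pattern
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips everything through the last "node" -> "0"
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}
    ((no_nodes > 0)) || exit 1            # same guard as hugepages.sh@33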
kB' 'Inactive(file): 3406828 kB' 'Unevictable: 18500 kB' 'Mlocked: 18500 kB' 'Dirty: 676 kB' 'Writeback: 0 kB' 'FilePages: 4487056 kB' 'Mapped: 74408 kB' 'AnonPages: 144332 kB' 'Shmem: 2628 kB' 'KernelStack: 4472 kB' 'PageTables: 3256 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 210976 kB' 'Slab: 310112 kB' 'SReclaimable: 210976 kB' 'SUnreclaim: 99136 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.292 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.292 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # continue 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.293 07:04:59 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.293 07:04:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.293 07:04:59 -- setup/common.sh@33 -- # echo 0 00:04:26.293 07:04:59 -- setup/common.sh@33 -- # return 0 00:04:26.293 07:04:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.293 07:04:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.293 07:04:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.293 07:04:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.293 node0=1024 expecting 1024 00:04:26.293 07:04:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:26.293 07:04:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.293 00:04:26.293 real 0m1.301s 00:04:26.293 user 0m0.488s 
00:04:26.293 sys 0m0.864s 00:04:26.293 07:04:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.293 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:26.293 ************************************ 00:04:26.293 END TEST no_shrink_alloc 00:04:26.293 ************************************ 00:04:26.293 07:04:59 -- setup/hugepages.sh@217 -- # clear_hp 00:04:26.293 07:04:59 -- setup/hugepages.sh@37 -- # local node hp 00:04:26.293 07:04:59 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:26.293 07:04:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.293 07:04:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:26.293 07:04:59 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:26.293 07:04:59 -- setup/hugepages.sh@41 -- # echo 0 00:04:26.293 07:04:59 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:26.293 07:04:59 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:26.293 ************************************ 00:04:26.293 END TEST hugepages 00:04:26.293 ************************************ 00:04:26.293 00:04:26.293 real 0m6.211s 00:04:26.293 user 0m1.985s 00:04:26.293 sys 0m4.317s 00:04:26.293 07:04:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:26.293 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:26.553 07:04:59 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:26.553 07:04:59 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:26.553 07:04:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:26.553 07:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:26.553 ************************************ 00:04:26.553 START TEST driver 00:04:26.553 ************************************ 00:04:26.553 07:04:59 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:26.553 * Looking for test storage... 
00:04:26.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:26.553 07:05:00 -- setup/driver.sh@68 -- # setup reset 00:04:26.553 07:05:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.553 07:05:00 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:27.121 07:05:00 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:27.121 07:05:00 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:27.121 07:05:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:27.121 07:05:00 -- common/autotest_common.sh@10 -- # set +x 00:04:27.121 ************************************ 00:04:27.121 START TEST guess_driver 00:04:27.121 ************************************ 00:04:27.121 07:05:00 -- common/autotest_common.sh@1102 -- # guess_driver 00:04:27.121 07:05:00 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:27.121 07:05:00 -- setup/driver.sh@47 -- # local fail=0 00:04:27.121 07:05:00 -- setup/driver.sh@49 -- # pick_driver 00:04:27.121 07:05:00 -- setup/driver.sh@36 -- # vfio 00:04:27.121 07:05:00 -- setup/driver.sh@21 -- # local iommu_groups 00:04:27.121 07:05:00 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:27.121 07:05:00 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:27.121 07:05:00 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:27.121 07:05:00 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:27.121 07:05:00 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:27.121 07:05:00 -- setup/driver.sh@29 -- # [[ N == Y ]] 00:04:27.121 07:05:00 -- setup/driver.sh@32 -- # return 1 00:04:27.121 07:05:00 -- setup/driver.sh@38 -- # uio 00:04:27.121 07:05:00 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:27.121 07:05:00 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:27.121 07:05:00 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:27.121 07:05:00 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:27.121 07:05:00 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/5.4.0-144-generic/kernel/drivers/uio/uio.ko 00:04:27.121 insmod /lib/modules/5.4.0-144-generic/kernel/drivers/uio/uio_pci_generic.ko == *\.\k\o* ]] 00:04:27.121 07:05:00 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:27.121 07:05:00 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:27.121 Looking for driver=uio_pci_generic 00:04:27.121 07:05:00 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:27.121 07:05:00 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:27.121 07:05:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.121 07:05:00 -- setup/driver.sh@45 -- # setup output config 00:04:27.121 07:05:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.121 07:05:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:27.380 07:05:00 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:27.380 07:05:00 -- setup/driver.sh@58 -- # continue 00:04:27.380 07:05:00 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:27.381 07:05:01 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:27.381 07:05:01 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:27.381 07:05:01 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.759 07:05:02 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:28.759 07:05:02 -- setup/driver.sh@65 -- # setup reset
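The guess_driver trace above reduces to one decision: pick vfio-pci only when IOMMU groups are present (or the vfio module explicitly allows unsafe no-IOMMU mode), and otherwise fall back to uio_pci_generic, accepting it if modprobe can resolve its module chain. A minimal sketch of that flow, condensed from the setup/driver.sh records in the trace (the function body and structure here are illustrative, not the verbatim script):

shopt -s nullglob   # an empty glob must expand to zero elements, as in the trace

pick_driver() {
    local unsafe_vfio=N iommu_groups
    # vfio without a working IOMMU is usable only if explicitly enabled
    [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
        unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    iommu_groups=(/sys/kernel/iommu_groups/*)
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
        return 0
    fi
    # modprobe --show-depends prints the insmod chain without loading anything;
    # a ".ko" hit means the module and its dependencies exist for this kernel
    if modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
        echo uio_pci_generic
        return 0
    fi
    echo 'No valid driver found'
    return 1
}

On this VM there are no IOMMU groups and unsafe_vfio=N, so the vfio branch returns 1 and the probe settles on uio_pci_generic, matching the 'Looking for driver=uio_pci_generic' line in the trace above.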
00:04:28.759 07:05:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:28.759 07:05:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.018 00:04:29.018 real 0m2.053s 00:04:29.018 user 0m0.520s 00:04:29.018 sys 0m1.459s 00:04:29.018 07:05:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.018 ************************************ 00:04:29.018 END TEST guess_driver 00:04:29.018 ************************************ 00:04:29.018 07:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.018 00:04:29.018 real 0m2.619s 00:04:29.018 user 0m0.817s 00:04:29.018 sys 0m1.728s 00:04:29.018 ************************************ 00:04:29.018 END TEST driver 00:04:29.018 ************************************ 00:04:29.018 07:05:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.018 07:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.018 07:05:02 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:29.018 07:05:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:29.018 07:05:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:29.018 07:05:02 -- common/autotest_common.sh@10 -- # set +x 00:04:29.018 ************************************ 00:04:29.018 START TEST devices 00:04:29.018 ************************************ 00:04:29.018 07:05:02 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:29.278 * Looking for test storage... 00:04:29.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:29.278 07:05:02 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:29.278 07:05:02 -- setup/devices.sh@192 -- # setup reset 00:04:29.278 07:05:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.278 07:05:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:29.537 07:05:03 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:29.537 07:05:03 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:04:29.537 07:05:03 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:04:29.537 07:05:03 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:04:29.537 07:05:03 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:04:29.537 07:05:03 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:04:29.537 07:05:03 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:04:29.537 07:05:03 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:29.537 07:05:03 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:04:29.537 07:05:03 -- setup/devices.sh@196 -- # blocks=() 00:04:29.537 07:05:03 -- setup/devices.sh@196 -- # declare -a blocks 00:04:29.537 07:05:03 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:29.537 07:05:03 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:29.537 07:05:03 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:29.537 07:05:03 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:29.537 07:05:03 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:29.537 07:05:03 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:29.537 07:05:03 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:29.537 07:05:03 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:29.537 07:05:03 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:29.537 07:05:03 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:29.537 07:05:03 -- 
scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:29.537 No valid GPT data, bailing 00:04:29.537 07:05:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:29.537 07:05:03 -- scripts/common.sh@393 -- # pt= 00:04:29.537 07:05:03 -- scripts/common.sh@394 -- # return 1 00:04:29.537 07:05:03 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:29.537 07:05:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:29.537 07:05:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:29.537 07:05:03 -- setup/common.sh@80 -- # echo 5368709120 00:04:29.537 07:05:03 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:29.537 07:05:03 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:29.537 07:05:03 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:29.537 07:05:03 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:29.537 07:05:03 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:29.537 07:05:03 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:29.537 07:05:03 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:29.537 07:05:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:29.537 07:05:03 -- common/autotest_common.sh@10 -- # set +x 00:04:29.796 ************************************ 00:04:29.796 START TEST nvme_mount 00:04:29.796 ************************************ 00:04:29.796 07:05:03 -- common/autotest_common.sh@1102 -- # nvme_mount 00:04:29.796 07:05:03 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:29.796 07:05:03 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:29.796 07:05:03 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.796 07:05:03 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.796 07:05:03 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:29.796 07:05:03 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:29.796 07:05:03 -- setup/common.sh@40 -- # local part_no=1 00:04:29.796 07:05:03 -- setup/common.sh@41 -- # local size=1073741824 00:04:29.796 07:05:03 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:29.796 07:05:03 -- setup/common.sh@44 -- # parts=() 00:04:29.796 07:05:03 -- setup/common.sh@44 -- # local parts 00:04:29.796 07:05:03 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:29.796 07:05:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.796 07:05:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:29.796 07:05:03 -- setup/common.sh@46 -- # (( part++ )) 00:04:29.796 07:05:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:29.796 07:05:03 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:29.796 07:05:03 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:29.796 07:05:03 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:30.733 Creating new GPT entries in memory. 00:04:30.733 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:30.733 other utilities. 00:04:30.733 07:05:04 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:30.733 07:05:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:30.733 07:05:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:30.733 07:05:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:30.733 07:05:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:31.667 Creating new GPT entries in memory. 00:04:31.667 The operation has completed successfully. 00:04:31.667 07:05:05 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.667 07:05:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.667 07:05:05 -- setup/common.sh@62 -- # wait 100364 00:04:31.925 07:05:05 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.925 07:05:05 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:31.925 07:05:05 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.925 07:05:05 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:31.925 07:05:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:31.925 07:05:05 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.926 07:05:05 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.926 07:05:05 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:31.926 07:05:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:31.926 07:05:05 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.926 07:05:05 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:31.926 07:05:05 -- setup/devices.sh@53 -- # local found=0 00:04:31.926 07:05:05 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.926 07:05:05 -- setup/devices.sh@56 -- # : 00:04:31.926 07:05:05 -- setup/devices.sh@59 -- # local pci status 00:04:31.926 07:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.926 07:05:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:31.926 07:05:05 -- setup/devices.sh@47 -- # setup output config 00:04:31.926 07:05:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.926 07:05:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:31.926 07:05:05 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.926 07:05:05 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:31.926 07:05:05 -- setup/devices.sh@63 -- # found=1 00:04:31.926 07:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.926 07:05:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:31.926 07:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:32.185 07:05:05 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:32.185 07:05:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.565 07:05:06 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.565 07:05:06 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:33.565 07:05:06 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.565 07:05:06 -- setup/devices.sh@73 -- # 
[[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.565 07:05:06 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:33.565 07:05:06 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:33.565 07:05:06 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.565 07:05:06 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.565 07:05:06 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:33.565 07:05:06 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:33.565 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:33.565 07:05:06 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:33.565 07:05:06 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:33.565 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:33.565 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:33.565 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:33.565 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:33.565 07:05:06 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:33.565 07:05:06 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:33.565 07:05:06 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.565 07:05:06 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:33.565 07:05:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:33.565 07:05:06 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.565 07:05:06 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:33.565 07:05:06 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:33.565 07:05:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:33.565 07:05:06 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:33.565 07:05:06 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:33.565 07:05:06 -- setup/devices.sh@53 -- # local found=0 00:04:33.565 07:05:06 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.565 07:05:06 -- setup/devices.sh@56 -- # : 00:04:33.565 07:05:06 -- setup/devices.sh@59 -- # local pci status 00:04:33.565 07:05:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.565 07:05:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:33.565 07:05:06 -- setup/devices.sh@47 -- # setup output config 00:04:33.565 07:05:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.565 07:05:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:33.565 07:05:07 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.565 07:05:07 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:33.565 07:05:07 -- setup/devices.sh@63 -- # found=1 00:04:33.565 07:05:07 -- setup/devices.sh@60 -- # read -r pci 
_ _ status 00:04:33.565 07:05:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.565 07:05:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.565 07:05:07 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:33.565 07:05:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.946 07:05:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.946 07:05:08 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:34.946 07:05:08 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.946 07:05:08 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.946 07:05:08 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:34.946 07:05:08 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:34.946 07:05:08 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:04:34.946 07:05:08 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:34.946 07:05:08 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:34.946 07:05:08 -- setup/devices.sh@50 -- # local mount_point= 00:04:34.946 07:05:08 -- setup/devices.sh@51 -- # local test_file= 00:04:34.946 07:05:08 -- setup/devices.sh@53 -- # local found=0 00:04:34.946 07:05:08 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.946 07:05:08 -- setup/devices.sh@59 -- # local pci status 00:04:34.946 07:05:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.946 07:05:08 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:34.946 07:05:08 -- setup/devices.sh@47 -- # setup output config 00:04:34.946 07:05:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.946 07:05:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.205 07:05:08 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:35.205 07:05:08 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:35.205 07:05:08 -- setup/devices.sh@63 -- # found=1 00:04:35.205 07:05:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.205 07:05:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:35.205 07:05:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.205 07:05:08 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:35.205 07:05:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.629 07:05:10 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:36.629 07:05:10 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:36.629 07:05:10 -- setup/devices.sh@68 -- # return 0 00:04:36.629 07:05:10 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:36.629 07:05:10 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:36.629 07:05:10 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:36.629 07:05:10 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:36.629 07:05:10 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:36.629 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:36.629 ************************************ 00:04:36.629 END TEST nvme_mount 00:04:36.629 ************************************ 00:04:36.629 00:04:36.629 real 0m6.944s 00:04:36.629 user 
0m0.717s 00:04:36.629 sys 0m4.100s 00:04:36.629 07:05:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:36.629 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.629 07:05:10 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:36.630 07:05:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:36.630 07:05:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:36.630 07:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:36.630 ************************************ 00:04:36.630 START TEST dm_mount 00:04:36.630 ************************************ 00:04:36.630 07:05:10 -- common/autotest_common.sh@1102 -- # dm_mount 00:04:36.630 07:05:10 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:36.630 07:05:10 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:36.630 07:05:10 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:36.630 07:05:10 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:36.630 07:05:10 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:36.630 07:05:10 -- setup/common.sh@40 -- # local part_no=2 00:04:36.630 07:05:10 -- setup/common.sh@41 -- # local size=1073741824 00:04:36.630 07:05:10 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:36.630 07:05:10 -- setup/common.sh@44 -- # parts=() 00:04:36.630 07:05:10 -- setup/common.sh@44 -- # local parts 00:04:36.630 07:05:10 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:36.630 07:05:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.630 07:05:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.630 07:05:10 -- setup/common.sh@46 -- # (( part++ )) 00:04:36.630 07:05:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.630 07:05:10 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:36.630 07:05:10 -- setup/common.sh@46 -- # (( part++ )) 00:04:36.630 07:05:10 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:36.630 07:05:10 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:36.630 07:05:10 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:36.630 07:05:10 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:38.004 Creating new GPT entries in memory. 00:04:38.004 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.004 other utilities. 00:04:38.004 07:05:11 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.004 07:05:11 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.004 07:05:11 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.004 07:05:11 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.004 07:05:11 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:38.940 Creating new GPT entries in memory. 00:04:38.940 The operation has completed successfully. 00:04:38.940 07:05:12 -- setup/common.sh@57 -- # (( part++ )) 00:04:38.940 07:05:12 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.940 07:05:12 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.940 07:05:12 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.940 07:05:12 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:39.878 The operation has completed successfully. 
00:04:39.878 07:05:13 -- setup/common.sh@57 -- # (( part++ )) 00:04:39.878 07:05:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.878 07:05:13 -- setup/common.sh@62 -- # wait 100858 00:04:39.878 07:05:13 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:39.878 07:05:13 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.878 07:05:13 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:39.878 07:05:13 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:39.878 07:05:13 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:39.878 07:05:13 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.878 07:05:13 -- setup/devices.sh@161 -- # break 00:04:39.878 07:05:13 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.878 07:05:13 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:39.878 07:05:13 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:39.878 07:05:13 -- setup/devices.sh@166 -- # dm=dm-0 00:04:39.878 07:05:13 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:39.878 07:05:13 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:39.878 07:05:13 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.878 07:05:13 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:39.878 07:05:13 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.878 07:05:13 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:39.878 07:05:13 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:39.878 07:05:13 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.878 07:05:13 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:39.878 07:05:13 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:39.878 07:05:13 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:39.878 07:05:13 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:39.878 07:05:13 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:39.878 07:05:13 -- setup/devices.sh@53 -- # local found=0 00:04:39.878 07:05:13 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:39.878 07:05:13 -- setup/devices.sh@56 -- # : 00:04:39.878 07:05:13 -- setup/devices.sh@59 -- # local pci status 00:04:39.878 07:05:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.878 07:05:13 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:39.878 07:05:13 -- setup/devices.sh@47 -- # setup output config 00:04:39.878 07:05:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.878 07:05:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:40.137 07:05:13 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:40.137 07:05:13 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:40.137 07:05:13 -- setup/devices.sh@63 -- # found=1 00:04:40.137 07:05:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.137 07:05:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:40.137 07:05:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.395 07:05:13 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:40.395 07:05:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.774 07:05:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:41.774 07:05:15 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:41.774 07:05:15 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.774 07:05:15 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.774 07:05:15 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:41.774 07:05:15 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:41.774 07:05:15 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:41.774 07:05:15 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:04:41.774 07:05:15 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:41.774 07:05:15 -- setup/devices.sh@50 -- # local mount_point= 00:04:41.774 07:05:15 -- setup/devices.sh@51 -- # local test_file= 00:04:41.774 07:05:15 -- setup/devices.sh@53 -- # local found=0 00:04:41.774 07:05:15 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:41.774 07:05:15 -- setup/devices.sh@59 -- # local pci status 00:04:41.774 07:05:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.774 07:05:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:04:41.774 07:05:15 -- setup/devices.sh@47 -- # setup output config 00:04:41.774 07:05:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.774 07:05:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:41.774 07:05:15 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:41.774 07:05:15 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:41.774 07:05:15 -- setup/devices.sh@63 -- # found=1 00:04:41.774 07:05:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.774 07:05:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:41.774 07:05:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.033 07:05:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:04:42.033 07:05:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.408 07:05:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.408 07:05:16 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:43.408 07:05:16 -- setup/devices.sh@68 -- # return 0 00:04:43.408 07:05:16 -- setup/devices.sh@187 -- # cleanup_dm 00:04:43.408 07:05:16 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:43.408 07:05:16 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.408 07:05:16 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:43.408 07:05:16 -- 
setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.408 07:05:16 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:43.408 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:43.408 07:05:16 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.408 07:05:16 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:43.408 00:04:43.408 real 0m6.550s 00:04:43.408 user 0m0.463s 00:04:43.408 sys 0m2.840s 00:04:43.408 ************************************ 00:04:43.408 07:05:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.408 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:43.408 END TEST dm_mount 00:04:43.408 ************************************ 00:04:43.408 07:05:16 -- setup/devices.sh@1 -- # cleanup 00:04:43.408 07:05:16 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:43.408 07:05:16 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:43.408 07:05:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.408 07:05:16 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:43.408 07:05:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.408 07:05:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:43.408 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:43.408 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:43.408 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:43.408 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:43.408 07:05:16 -- setup/devices.sh@12 -- # cleanup_dm 00:04:43.408 07:05:16 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:43.408 07:05:16 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:43.408 07:05:16 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:43.408 07:05:16 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:43.408 07:05:16 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:43.408 07:05:16 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:43.408 00:04:43.408 real 0m14.272s 00:04:43.408 user 0m1.600s 00:04:43.408 sys 0m7.241s 00:04:43.408 07:05:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.408 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:43.408 ************************************ 00:04:43.408 END TEST devices 00:04:43.408 ************************************ 00:04:43.408 ************************************ 00:04:43.408 END TEST setup.sh 00:04:43.408 ************************************ 00:04:43.408 00:04:43.408 real 0m28.607s 00:04:43.408 user 0m6.100s 00:04:43.408 sys 0m17.083s 00:04:43.408 07:05:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.408 07:05:16 -- common/autotest_common.sh@10 -- # set +x 00:04:43.408 07:05:16 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:43.666 Hugepages 00:04:43.667 node hugesize free / total 00:04:43.667 node0 1048576kB 0 / 0 00:04:43.667 node0 2048kB 2048 / 2048 00:04:43.667 00:04:43.667 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.667 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:43.667 NVMe 0000:00:06.0 1b36 0010 0 nvme nvme0 nvme0n1 00:04:43.667 07:05:17 -- spdk/autotest.sh@141 -- # uname -s 00:04:43.667 07:05:17 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:43.667 07:05:17 -- spdk/autotest.sh@143 -- # 
nvme_namespace_revert 00:04:43.667 07:05:17 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:44.233 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:04:44.233 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.609 07:05:18 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:46.545 07:05:19 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:46.545 07:05:19 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:46.545 07:05:19 -- common/autotest_common.sh@1517 -- # bdfs=($(get_nvme_bdfs)) 00:04:46.545 07:05:19 -- common/autotest_common.sh@1517 -- # get_nvme_bdfs 00:04:46.545 07:05:19 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:46.545 07:05:19 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:46.545 07:05:19 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.545 07:05:19 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.545 07:05:19 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:46.545 07:05:20 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:46.545 07:05:20 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:04:46.545 07:05:20 -- common/autotest_common.sh@1519 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:46.804 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:04:46.804 Waiting for block devices as requested 00:04:46.804 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.063 07:05:20 -- common/autotest_common.sh@1521 -- # for bdf in "${bdfs[@]}" 00:04:47.063 07:05:20 -- common/autotest_common.sh@1522 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:47.063 07:05:20 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:47.063 07:05:20 -- common/autotest_common.sh@1485 -- # grep 0000:00:06.0/nvme/nvme 00:04:47.063 07:05:20 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:47.063 07:05:20 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:04:47.063 07:05:20 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:04:47.063 07:05:20 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:47.063 07:05:20 -- common/autotest_common.sh@1522 -- # nvme_ctrlr=/dev/nvme0 00:04:47.063 07:05:20 -- common/autotest_common.sh@1523 -- # [[ -z /dev/nvme0 ]] 00:04:47.063 07:05:20 -- common/autotest_common.sh@1528 -- # nvme id-ctrl /dev/nvme0 00:04:47.063 07:05:20 -- common/autotest_common.sh@1528 -- # grep oacs 00:04:47.063 07:05:20 -- common/autotest_common.sh@1528 -- # cut -d: -f2 00:04:47.063 07:05:20 -- common/autotest_common.sh@1528 -- # oacs=' 0x12a' 00:04:47.063 07:05:20 -- common/autotest_common.sh@1529 -- # oacs_ns_manage=8 00:04:47.063 07:05:20 -- common/autotest_common.sh@1531 -- # [[ 8 -ne 0 ]] 00:04:47.063 07:05:20 -- common/autotest_common.sh@1537 -- # nvme id-ctrl /dev/nvme0 00:04:47.063 07:05:20 -- common/autotest_common.sh@1537 -- # grep unvmcap 00:04:47.063 07:05:20 -- common/autotest_common.sh@1537 -- # cut -d: -f2 00:04:47.063 07:05:20 -- common/autotest_common.sh@1537 -- # unvmcap=' 0' 00:04:47.063 07:05:20 -- common/autotest_common.sh@1538 -- # [[ 0 -eq 0 ]] 00:04:47.063 07:05:20 -- common/autotest_common.sh@1540 -- # continue 00:04:47.063 07:05:20 -- 
spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:47.063 07:05:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.063 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:04:47.063 07:05:20 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:47.063 07:05:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:47.063 07:05:20 -- common/autotest_common.sh@10 -- # set +x 00:04:47.063 07:05:20 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:04:47.580 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.006 07:05:22 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:49.006 07:05:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:49.006 07:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.006 07:05:22 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:49.006 07:05:22 -- common/autotest_common.sh@1574 -- # mapfile -t bdfs 00:04:49.006 07:05:22 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs_by_id 0x0a54 00:04:49.006 07:05:22 -- common/autotest_common.sh@1560 -- # bdfs=() 00:04:49.006 07:05:22 -- common/autotest_common.sh@1560 -- # local bdfs 00:04:49.006 07:05:22 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:49.006 07:05:22 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:49.006 07:05:22 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:49.006 07:05:22 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.006 07:05:22 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:49.006 07:05:22 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:49.006 07:05:22 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:49.006 07:05:22 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:04:49.006 07:05:22 -- common/autotest_common.sh@1562 -- # for bdf in $(get_nvme_bdfs) 00:04:49.006 07:05:22 -- common/autotest_common.sh@1563 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:49.006 07:05:22 -- common/autotest_common.sh@1563 -- # device=0x0010 00:04:49.006 07:05:22 -- common/autotest_common.sh@1564 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:49.006 07:05:22 -- common/autotest_common.sh@1569 -- # printf '%s\n' 00:04:49.006 07:05:22 -- common/autotest_common.sh@1575 -- # [[ -z '' ]] 00:04:49.006 07:05:22 -- common/autotest_common.sh@1576 -- # return 0 00:04:49.006 07:05:22 -- spdk/autotest.sh@161 -- # '[' 1 -eq 1 ']' 00:04:49.006 07:05:22 -- spdk/autotest.sh@162 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:49.006 07:05:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:04:49.006 07:05:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:04:49.006 07:05:22 -- common/autotest_common.sh@10 -- # set +x 00:04:49.006 ************************************ 00:04:49.006 START TEST unittest 00:04:49.006 ************************************ 00:04:49.006 07:05:22 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:49.006 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:49.006 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:04:49.006 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:04:49.006 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:04:49.006 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit/../.. 00:04:49.006 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:49.006 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:04:49.006 ++ rpc_py=rpc_cmd 00:04:49.006 ++ set -e 00:04:49.006 ++ shopt -s nullglob 00:04:49.006 ++ shopt -s extglob 00:04:49.006 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:04:49.006 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:04:49.006 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:04:49.006 +++ CONFIG_FIO_PLUGIN=y 00:04:49.006 +++ CONFIG_NVME_CUSE=y 00:04:49.006 +++ CONFIG_RAID5F=y 00:04:49.006 +++ CONFIG_LTO=n 00:04:49.006 +++ CONFIG_SMA=n 00:04:49.006 +++ CONFIG_ISAL=y 00:04:49.006 +++ CONFIG_OPENSSL_PATH= 00:04:49.006 +++ CONFIG_IDXD_KERNEL=n 00:04:49.006 +++ CONFIG_URING_PATH= 00:04:49.006 +++ CONFIG_DAOS=n 00:04:49.006 +++ CONFIG_DPDK_LIB_DIR= 00:04:49.006 +++ CONFIG_OCF=n 00:04:49.006 +++ CONFIG_EXAMPLES=y 00:04:49.006 +++ CONFIG_RDMA_PROV=verbs 00:04:49.006 +++ CONFIG_ISCSI_INITIATOR=y 00:04:49.006 +++ CONFIG_VTUNE=n 00:04:49.006 +++ CONFIG_DPDK_INC_DIR= 00:04:49.006 +++ CONFIG_CET=n 00:04:49.006 +++ CONFIG_TESTS=y 00:04:49.006 +++ CONFIG_APPS=y 00:04:49.006 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:04:49.006 +++ CONFIG_DAOS_DIR= 00:04:49.006 +++ CONFIG_CRYPTO_MLX5=n 00:04:49.006 +++ CONFIG_XNVME=n 00:04:49.006 +++ CONFIG_UNIT_TESTS=y 00:04:49.006 +++ CONFIG_FUSE=n 00:04:49.006 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:04:49.006 +++ CONFIG_OCF_PATH= 00:04:49.006 +++ CONFIG_WPDK_DIR= 00:04:49.006 +++ CONFIG_VFIO_USER=n 00:04:49.006 +++ CONFIG_MAX_LCORES= 00:04:49.006 +++ CONFIG_ARCH=native 00:04:49.006 +++ CONFIG_TSAN=n 00:04:49.006 +++ CONFIG_VIRTIO=y 00:04:49.006 +++ CONFIG_IPSEC_MB=n 00:04:49.006 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:04:49.006 +++ CONFIG_ASAN=y 00:04:49.006 +++ CONFIG_SHARED=n 00:04:49.006 +++ CONFIG_VTUNE_DIR= 00:04:49.006 +++ CONFIG_RDMA_SET_TOS=y 00:04:49.006 +++ CONFIG_VBDEV_COMPRESS=n 00:04:49.006 +++ CONFIG_VFIO_USER_DIR= 00:04:49.006 +++ CONFIG_FUZZER_LIB= 00:04:49.006 +++ CONFIG_HAVE_EXECINFO_H=y 00:04:49.006 +++ CONFIG_USDT=n 00:04:49.006 +++ CONFIG_URING_ZNS=n 00:04:49.006 +++ CONFIG_FC_PATH= 00:04:49.006 +++ CONFIG_COVERAGE=y 00:04:49.006 +++ CONFIG_CUSTOMOCF=n 00:04:49.006 +++ CONFIG_DPDK_PKG_CONFIG=n 00:04:49.006 +++ CONFIG_WERROR=y 00:04:49.006 +++ CONFIG_DEBUG=y 00:04:49.006 +++ CONFIG_RDMA=y 00:04:49.006 +++ CONFIG_HAVE_ARC4RANDOM=n 00:04:49.006 +++ CONFIG_FUZZER=n 00:04:49.006 +++ CONFIG_FC=n 00:04:49.006 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:04:49.006 +++ CONFIG_HAVE_LIBARCHIVE=n 00:04:49.006 +++ CONFIG_DPDK_COMPRESSDEV=n 00:04:49.006 +++ CONFIG_CROSS_PREFIX= 00:04:49.006 +++ CONFIG_PREFIX=/usr/local 00:04:49.006 +++ CONFIG_HAVE_LIBBSD=n 00:04:49.006 +++ CONFIG_UBSAN=y 00:04:49.007 +++ CONFIG_PGO_CAPTURE=n 00:04:49.007 +++ CONFIG_UBLK=n 00:04:49.007 +++ CONFIG_ISAL_CRYPTO=y 00:04:49.007 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:04:49.007 +++ CONFIG_CRYPTO=n 00:04:49.007 +++ CONFIG_RBD=n 00:04:49.007 +++ CONFIG_LIBDIR= 00:04:49.007 +++ CONFIG_IPSEC_MB_DIR= 00:04:49.007 +++ CONFIG_PGO_USE=n 00:04:49.007 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:49.007 +++ CONFIG_GOLANG=n 00:04:49.007 +++ CONFIG_VHOST=y 00:04:49.007 +++ CONFIG_IDXD=y 00:04:49.007 +++ CONFIG_AVAHI=n 00:04:49.007 +++ CONFIG_URING=n 00:04:49.007 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:49.007 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:04:49.007 ++++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/common 00:04:49.007 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:04:49.007 +++ _root=/home/vagrant/spdk_repo/spdk 00:04:49.007 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:04:49.007 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:04:49.007 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:04:49.007 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:04:49.007 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:04:49.007 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:04:49.007 +++ VHOST_APP=("$_app_dir/vhost") 00:04:49.007 +++ DD_APP=("$_app_dir/spdk_dd") 00:04:49.007 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:04:49.007 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:04:49.007 +++ [[ #ifndef SPDK_CONFIG_H 00:04:49.007 #define SPDK_CONFIG_H 00:04:49.007 #define SPDK_CONFIG_APPS 1 00:04:49.007 #define SPDK_CONFIG_ARCH native 00:04:49.007 #define SPDK_CONFIG_ASAN 1 00:04:49.007 #undef SPDK_CONFIG_AVAHI 00:04:49.007 #undef SPDK_CONFIG_CET 00:04:49.007 #define SPDK_CONFIG_COVERAGE 1 00:04:49.007 #define SPDK_CONFIG_CROSS_PREFIX 00:04:49.007 #undef SPDK_CONFIG_CRYPTO 00:04:49.007 #undef SPDK_CONFIG_CRYPTO_MLX5 00:04:49.007 #undef SPDK_CONFIG_CUSTOMOCF 00:04:49.007 #undef SPDK_CONFIG_DAOS 00:04:49.007 #define SPDK_CONFIG_DAOS_DIR 00:04:49.007 #define SPDK_CONFIG_DEBUG 1 00:04:49.007 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:04:49.007 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:04:49.007 #define SPDK_CONFIG_DPDK_INC_DIR 00:04:49.007 #define SPDK_CONFIG_DPDK_LIB_DIR 00:04:49.007 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:04:49.007 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:04:49.007 #define SPDK_CONFIG_EXAMPLES 1 00:04:49.007 #undef SPDK_CONFIG_FC 00:04:49.007 #define SPDK_CONFIG_FC_PATH 00:04:49.007 #define SPDK_CONFIG_FIO_PLUGIN 1 00:04:49.007 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:04:49.007 #undef SPDK_CONFIG_FUSE 00:04:49.007 #undef SPDK_CONFIG_FUZZER 00:04:49.007 #define SPDK_CONFIG_FUZZER_LIB 00:04:49.007 #undef SPDK_CONFIG_GOLANG 00:04:49.007 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:04:49.007 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:04:49.007 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:04:49.007 #undef SPDK_CONFIG_HAVE_LIBBSD 00:04:49.007 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:04:49.007 #define SPDK_CONFIG_IDXD 1 00:04:49.007 #undef SPDK_CONFIG_IDXD_KERNEL 00:04:49.007 #undef SPDK_CONFIG_IPSEC_MB 00:04:49.007 #define SPDK_CONFIG_IPSEC_MB_DIR 00:04:49.007 #define SPDK_CONFIG_ISAL 1 00:04:49.007 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:04:49.007 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:04:49.007 #define SPDK_CONFIG_LIBDIR 00:04:49.007 #undef SPDK_CONFIG_LTO 00:04:49.007 #define SPDK_CONFIG_MAX_LCORES 00:04:49.007 #define SPDK_CONFIG_NVME_CUSE 1 00:04:49.007 #undef SPDK_CONFIG_OCF 00:04:49.007 #define SPDK_CONFIG_OCF_PATH 00:04:49.007 #define SPDK_CONFIG_OPENSSL_PATH 00:04:49.007 #undef SPDK_CONFIG_PGO_CAPTURE 00:04:49.007 #undef SPDK_CONFIG_PGO_USE 00:04:49.007 #define SPDK_CONFIG_PREFIX /usr/local 00:04:49.007 #define SPDK_CONFIG_RAID5F 1 00:04:49.007 #undef SPDK_CONFIG_RBD 00:04:49.007 #define SPDK_CONFIG_RDMA 1 00:04:49.007 #define SPDK_CONFIG_RDMA_PROV verbs 00:04:49.007 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:04:49.007 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:04:49.007 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:04:49.007 #undef SPDK_CONFIG_SHARED 00:04:49.007 #undef SPDK_CONFIG_SMA 00:04:49.007 #define SPDK_CONFIG_TESTS 1 00:04:49.007 
#undef SPDK_CONFIG_TSAN 00:04:49.007 #undef SPDK_CONFIG_UBLK 00:04:49.007 #define SPDK_CONFIG_UBSAN 1 00:04:49.007 #define SPDK_CONFIG_UNIT_TESTS 1 00:04:49.007 #undef SPDK_CONFIG_URING 00:04:49.007 #define SPDK_CONFIG_URING_PATH 00:04:49.007 #undef SPDK_CONFIG_URING_ZNS 00:04:49.007 #undef SPDK_CONFIG_USDT 00:04:49.007 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:04:49.007 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:04:49.007 #undef SPDK_CONFIG_VFIO_USER 00:04:49.007 #define SPDK_CONFIG_VFIO_USER_DIR 00:04:49.007 #define SPDK_CONFIG_VHOST 1 00:04:49.007 #define SPDK_CONFIG_VIRTIO 1 00:04:49.007 #undef SPDK_CONFIG_VTUNE 00:04:49.007 #define SPDK_CONFIG_VTUNE_DIR 00:04:49.007 #define SPDK_CONFIG_WERROR 1 00:04:49.007 #define SPDK_CONFIG_WPDK_DIR 00:04:49.007 #undef SPDK_CONFIG_XNVME 00:04:49.007 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:04:49.007 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:04:49.007 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:49.007 +++ [[ -e /bin/wpdk_common.sh ]] 00:04:49.007 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.007 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:49.007 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:04:49.007 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:49.007 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:04:49.007 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:04:49.007 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:04:49.007 +++ TEST_TAG=N/A 00:04:49.007 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:04:49.007 ++ : 1 00:04:49.007 ++ export RUN_NIGHTLY 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_RUN_VALGRIND 00:04:49.007 ++ : 1 00:04:49.007 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:04:49.007 ++ : 1 00:04:49.007 ++ export SPDK_TEST_UNITTEST 00:04:49.007 ++ : 00:04:49.007 ++ export SPDK_TEST_AUTOBUILD 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_RELEASE_BUILD 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_ISAL 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_ISCSI 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_ISCSI_INITIATOR 00:04:49.007 ++ : 1 00:04:49.007 ++ export SPDK_TEST_NVME 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_NVME_PMR 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_NVME_BP 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_NVME_CLI 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_NVME_CUSE 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_NVME_FDP 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_NVMF 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_VFIOUSER 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_VFIOUSER_QEMU 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_FUZZER 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_FUZZER_SHORT 00:04:49.007 ++ : rdma 00:04:49.007 ++ export SPDK_TEST_NVMF_TRANSPORT 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_RBD 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_VHOST 00:04:49.007 ++ : 1 00:04:49.007 ++ export SPDK_TEST_BLOCKDEV 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_IOAT 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_BLOBFS 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_VHOST_INIT 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_LVOL 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_VBDEV_COMPRESS 00:04:49.007 ++ : 1 00:04:49.007 
++ export SPDK_RUN_ASAN 00:04:49.007 ++ : 1 00:04:49.007 ++ export SPDK_RUN_UBSAN 00:04:49.007 ++ : 00:04:49.007 ++ export SPDK_RUN_EXTERNAL_DPDK 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_RUN_NON_ROOT 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_CRYPTO 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_FTL 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_OCF 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_VMD 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_OPAL 00:04:49.007 ++ : 00:04:49.007 ++ export SPDK_TEST_NATIVE_DPDK 00:04:49.007 ++ : true 00:04:49.007 ++ export SPDK_AUTOTEST_X 00:04:49.007 ++ : 1 00:04:49.007 ++ export SPDK_TEST_RAID5 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_URING 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_USDT 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_USE_IGB_UIO 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_SCHEDULER 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_SCANBUILD 00:04:49.007 ++ : 00:04:49.007 ++ export SPDK_TEST_NVMF_NICS 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_SMA 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_DAOS 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_XNVME 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_ACCEL_DSA 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_ACCEL_IAA 00:04:49.007 ++ : 00:04:49.007 ++ export SPDK_TEST_FUZZER_TARGET 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_TEST_NVMF_MDNS 00:04:49.007 ++ : 0 00:04:49.007 ++ export SPDK_JSONRPC_GO_CLIENT 00:04:49.007 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:49.007 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:04:49.007 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:49.007 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:04:49.007 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:49.008 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:49.008 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:49.008 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:04:49.008 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:04:49.008 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:04:49.008 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:49.008 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:04:49.008 ++ export PYTHONDONTWRITEBYTECODE=1 00:04:49.008 ++ PYTHONDONTWRITEBYTECODE=1 00:04:49.008 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:49.008 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:04:49.008 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:49.008 ++ 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:04:49.008 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:04:49.008 ++ rm -rf /var/tmp/asan_suppression_file 00:04:49.008 ++ cat 00:04:49.008 ++ echo leak:libfuse3.so 00:04:49.008 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:49.008 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:04:49.008 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:49.008 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:04:49.008 ++ '[' -z /var/spdk/dependencies ']' 00:04:49.008 ++ export DEPENDENCY_DIR 00:04:49.008 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:49.008 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:04:49.008 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:49.008 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:04:49.008 ++ export QEMU_BIN= 00:04:49.008 ++ QEMU_BIN= 00:04:49.008 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:49.008 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:04:49.008 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:49.008 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:04:49.008 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:49.008 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:49.008 ++ '[' 0 -eq 0 ']' 00:04:49.008 ++ export valgrind= 00:04:49.008 ++ valgrind= 00:04:49.008 +++ uname -s 00:04:49.008 ++ '[' Linux = Linux ']' 00:04:49.008 ++ HUGEMEM=4096 00:04:49.008 ++ export CLEAR_HUGE=yes 00:04:49.008 ++ CLEAR_HUGE=yes 00:04:49.008 ++ [[ 0 -eq 1 ]] 00:04:49.008 ++ [[ 0 -eq 1 ]] 00:04:49.008 ++ MAKE=make 00:04:49.008 +++ nproc 00:04:49.008 ++ MAKEFLAGS=-j10 00:04:49.008 ++ export HUGEMEM=4096 00:04:49.008 ++ HUGEMEM=4096 00:04:49.008 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:04:49.008 ++ NO_HUGE=() 00:04:49.008 ++ TEST_MODE= 00:04:49.008 ++ [[ -z '' ]] 00:04:49.008 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:49.008 ++ exec 00:04:49.008 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:04:49.008 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:04:49.008 ++ set_test_storage 2147483648 00:04:49.008 ++ [[ -v testdir ]] 00:04:49.008 ++ local requested_size=2147483648 00:04:49.008 ++ local mount target_dir 00:04:49.008 ++ local -A mounts fss sizes avails uses 00:04:49.008 ++ local source fs size avail mount use 00:04:49.008 ++ local storage_fallback storage_candidates 00:04:49.008 +++ mktemp -udt spdk.XXXXXX 00:04:49.008 ++ storage_fallback=/tmp/spdk.nGjHIb 00:04:49.008 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:04:49.008 ++ [[ -n '' ]] 00:04:49.008 ++ [[ -n '' ]] 00:04:49.008 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.nGjHIb/tests/unit /tmp/spdk.nGjHIb 00:04:49.267 ++ requested_size=2214592512 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 +++ df -T 00:04:49.267 +++ grep -v Filesystem 00:04:49.267 ++ mounts["$mount"]=udev 00:04:49.267 ++ fss["$mount"]=devtmpfs 00:04:49.267 ++ avails["$mount"]=6230982656 00:04:49.267 ++ sizes["$mount"]=6230982656 00:04:49.267 ++ uses["$mount"]=0 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=tmpfs 00:04:49.267 ++ 
fss["$mount"]=tmpfs 00:04:49.267 ++ avails["$mount"]=1254641664 00:04:49.267 ++ sizes["$mount"]=1255759872 00:04:49.267 ++ uses["$mount"]=1118208 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=/dev/vda1 00:04:49.267 ++ fss["$mount"]=ext4 00:04:49.267 ++ avails["$mount"]=11134148608 00:04:49.267 ++ sizes["$mount"]=20616794112 00:04:49.267 ++ uses["$mount"]=9465868288 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=tmpfs 00:04:49.267 ++ fss["$mount"]=tmpfs 00:04:49.267 ++ avails["$mount"]=6278787072 00:04:49.267 ++ sizes["$mount"]=6278787072 00:04:49.267 ++ uses["$mount"]=0 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=tmpfs 00:04:49.267 ++ fss["$mount"]=tmpfs 00:04:49.267 ++ avails["$mount"]=5242880 00:04:49.267 ++ sizes["$mount"]=5242880 00:04:49.267 ++ uses["$mount"]=0 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=tmpfs 00:04:49.267 ++ fss["$mount"]=tmpfs 00:04:49.267 ++ avails["$mount"]=6278787072 00:04:49.267 ++ sizes["$mount"]=6278787072 00:04:49.267 ++ uses["$mount"]=0 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=/dev/loop0 00:04:49.267 ++ fss["$mount"]=squashfs 00:04:49.267 ++ avails["$mount"]=0 00:04:49.267 ++ sizes["$mount"]=66453504 00:04:49.267 ++ uses["$mount"]=66453504 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=/dev/loop1 00:04:49.267 ++ fss["$mount"]=squashfs 00:04:49.267 ++ avails["$mount"]=0 00:04:49.267 ++ sizes["$mount"]=96337920 00:04:49.267 ++ uses["$mount"]=96337920 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=/dev/loop2 00:04:49.267 ++ fss["$mount"]=squashfs 00:04:49.267 ++ avails["$mount"]=0 00:04:49.267 ++ sizes["$mount"]=52297728 00:04:49.267 ++ uses["$mount"]=52297728 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=/dev/vda3 00:04:49.267 ++ fss["$mount"]=vfat 00:04:49.267 ++ avails["$mount"]=98705408 00:04:49.267 ++ sizes["$mount"]=109422592 00:04:49.267 ++ uses["$mount"]=10718208 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=tmpfs 00:04:49.267 ++ fss["$mount"]=tmpfs 00:04:49.267 ++ avails["$mount"]=1255755776 00:04:49.267 ++ sizes["$mount"]=1255755776 00:04:49.267 ++ uses["$mount"]=0 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:04:49.267 ++ fss["$mount"]=fuse.sshfs 00:04:49.267 ++ avails["$mount"]=96675024896 00:04:49.267 ++ sizes["$mount"]=105088212992 00:04:49.267 ++ uses["$mount"]=3027755008 00:04:49.267 ++ read -r source fs size use avail _ mount 00:04:49.267 ++ printf '* Looking for test storage...\n' 00:04:49.267 * Looking for test storage... 
00:04:49.267 ++ local target_space new_size 00:04:49.267 ++ for target_dir in "${storage_candidates[@]}" 00:04:49.267 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:04:49.267 +++ awk '$1 !~ /Filesystem/{print $6}' 00:04:49.267 ++ mount=/ 00:04:49.267 ++ target_space=11134148608 00:04:49.267 ++ (( target_space == 0 || target_space < requested_size )) 00:04:49.267 ++ (( target_space >= requested_size )) 00:04:49.267 ++ [[ ext4 == tmpfs ]] 00:04:49.268 ++ [[ ext4 == ramfs ]] 00:04:49.268 ++ [[ / == / ]] 00:04:49.268 ++ new_size=11680460800 00:04:49.268 ++ (( new_size * 100 / sizes[/] > 95 )) 00:04:49.268 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:49.268 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:04:49.268 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:04:49.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:04:49.268 ++ return 0 00:04:49.268 ++ set -o errtrace 00:04:49.268 ++ shopt -s extdebug 00:04:49.268 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:04:49.268 ++ PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:04:49.268 07:05:22 -- common/autotest_common.sh@1670 -- # true 00:04:49.268 07:05:22 -- common/autotest_common.sh@1672 -- # xtrace_fd 00:04:49.268 07:05:22 -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:04:49.268 07:05:22 -- common/autotest_common.sh@29 -- # exec 00:04:49.268 07:05:22 -- common/autotest_common.sh@31 -- # xtrace_restore 00:04:49.268 07:05:22 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:04:49.268 07:05:22 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:04:49.268 07:05:22 -- common/autotest_common.sh@18 -- # set -x 00:04:49.268 07:05:22 -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:04:49.268 07:05:22 -- unit/unittest.sh@151 -- # '[' 0 -eq 1 ']' 00:04:49.268 07:05:22 -- unit/unittest.sh@158 -- # '[' -z x ']' 00:04:49.268 07:05:22 -- unit/unittest.sh@165 -- # '[' 0 -eq 1 ']' 00:04:49.268 07:05:22 -- unit/unittest.sh@178 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:04:49.268 07:05:22 -- unit/unittest.sh@178 -- # CC_TYPE=CC_TYPE=gcc 00:04:49.268 07:05:22 -- unit/unittest.sh@179 -- # hash lcov 00:04:49.268 07:05:22 -- unit/unittest.sh@179 -- # grep -q '#define SPDK_CONFIG_COVERAGE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:49.268 07:05:22 -- unit/unittest.sh@179 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:49.268 07:05:22 -- unit/unittest.sh@180 -- # cov_avail=yes 00:04:49.268 07:05:22 -- unit/unittest.sh@184 -- # '[' yes = yes ']' 00:04:49.268 07:05:22 -- unit/unittest.sh@186 -- # [[ -z /home/vagrant/spdk_repo/spdk/../output ]] 00:04:49.268 07:05:22 -- unit/unittest.sh@189 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:49.268 07:05:22 -- unit/unittest.sh@191 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:04:49.268 07:05:22 -- unit/unittest.sh@199 -- # export 'LCOV_OPTS= 00:04:49.268 --rc lcov_branch_coverage=1 00:04:49.268 --rc lcov_function_coverage=1 00:04:49.268 --rc genhtml_branch_coverage=1 00:04:49.268 --rc genhtml_function_coverage=1 00:04:49.268 --rc genhtml_legend=1 00:04:49.268 --rc geninfo_all_blocks=1 00:04:49.268 ' 00:04:49.268 07:05:22 -- unit/unittest.sh@199 -- # LCOV_OPTS=' 00:04:49.268 --rc lcov_branch_coverage=1 00:04:49.268 --rc lcov_function_coverage=1 00:04:49.268 --rc genhtml_branch_coverage=1 00:04:49.268 --rc genhtml_function_coverage=1 00:04:49.268 --rc genhtml_legend=1 00:04:49.268 
--rc geninfo_all_blocks=1 00:04:49.268 ' 00:04:49.268 07:05:22 -- unit/unittest.sh@200 -- # export 'LCOV=lcov 00:04:49.268 --rc lcov_branch_coverage=1 00:04:49.268 --rc lcov_function_coverage=1 00:04:49.268 --rc genhtml_branch_coverage=1 00:04:49.268 --rc genhtml_function_coverage=1 00:04:49.268 --rc genhtml_legend=1 00:04:49.268 --rc geninfo_all_blocks=1 00:04:49.268 --no-external' 00:04:49.268 07:05:22 -- unit/unittest.sh@200 -- # LCOV='lcov 00:04:49.268 --rc lcov_branch_coverage=1 00:04:49.268 --rc lcov_function_coverage=1 00:04:49.268 --rc genhtml_branch_coverage=1 00:04:49.268 --rc genhtml_function_coverage=1 00:04:49.268 --rc genhtml_legend=1 00:04:49.268 --rc geninfo_all_blocks=1 00:04:49.268 --no-external' 00:04:49.268 07:05:22 -- unit/unittest.sh@202 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:55.836 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:55.836 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:55.836 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:55.837 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any 
data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions 
found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:55.837 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:55.837 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 
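A note on the wall of geninfo warnings running through this part of the log: the test/cpp_headers objects are generated translation units that each do nothing but include one public SPDK header, verifying the header compiles standalone. A unit with no function definitions produces a .gcno file that geninfo can only warn about, so these messages are expected noise. A hypothetical reproduction of one such warning (illustrative; not the actual cpp_headers Makefile rule):

echo '#include "spdk/nvme.h"' > nvme.cpp      # one header per translation unit
g++ --coverage -I include -c nvme.cpp         # emits nvme.gcno alongside nvme.o
geninfo . -o /dev/null                        # -> WARNING: ... no functions found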
00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:04:55.838 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:55.838 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:57.216 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:57.216 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:57.216 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:57.216 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:57.216 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:57.216 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:02.489 07:05:35 -- unit/unittest.sh@206 -- # uname -m 00:05:02.489 07:05:35 -- unit/unittest.sh@206 -- # '[' x86_64 = aarch64 ']' 00:05:02.489 07:05:35 -- unit/unittest.sh@210 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:02.489 07:05:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:02.489 07:05:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:02.489 07:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:02.489 ************************************ 00:05:02.489 START TEST unittest_pci_event 00:05:02.489 ************************************ 00:05:02.489 07:05:35 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:05:02.489 00:05:02.489 00:05:02.489 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.489 http://cunit.sourceforge.net/ 00:05:02.489 00:05:02.489 00:05:02.489 Suite: pci_event 00:05:02.489 Test: test_pci_parse_event ...[2024-02-13 07:05:35.491425] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid 
format for PCI device BDF: 0000 00:05:02.489 [2024-02-13 07:05:35.491912] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000 00:05:02.489 passed 00:05:02.489 00:05:02.489 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.489 suites 1 1 n/a 0 0 00:05:02.489 tests 1 1 1 0 0 00:05:02.489 asserts 15 15 15 0 n/a 00:05:02.489 00:05:02.489 Elapsed time = 0.001 seconds 00:05:02.489 00:05:02.489 real 0m0.044s 00:05:02.489 user 0m0.023s 00:05:02.489 sys 0m0.013s 00:05:02.489 ************************************ 00:05:02.489 END TEST unittest_pci_event 00:05:02.489 ************************************ 00:05:02.489 07:05:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.489 07:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:02.489 07:05:35 -- unit/unittest.sh@211 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:02.489 07:05:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:02.489 07:05:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:02.489 07:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:02.489 ************************************ 00:05:02.489 START TEST unittest_include 00:05:02.489 ************************************ 00:05:02.489 07:05:35 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:05:02.489 00:05:02.489 00:05:02.489 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.489 http://cunit.sourceforge.net/ 00:05:02.489 00:05:02.489 00:05:02.489 Suite: histogram 00:05:02.489 Test: histogram_test ...passed 00:05:02.489 Test: histogram_merge ...passed 00:05:02.489 00:05:02.489 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.489 suites 1 1 n/a 0 0 00:05:02.489 tests 2 2 2 0 0 00:05:02.489 asserts 50 50 50 0 n/a 00:05:02.489 00:05:02.489 Elapsed time = 0.006 seconds 00:05:02.489 00:05:02.489 real 0m0.040s 00:05:02.489 user 0m0.024s 00:05:02.489 sys 0m0.016s 00:05:02.489 ************************************ 00:05:02.489 END TEST unittest_include 00:05:02.489 ************************************ 00:05:02.489 07:05:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.489 07:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:02.489 07:05:35 -- unit/unittest.sh@212 -- # run_test unittest_bdev unittest_bdev 00:05:02.489 07:05:35 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:02.489 07:05:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:02.489 07:05:35 -- common/autotest_common.sh@10 -- # set +x 00:05:02.489 ************************************ 00:05:02.489 START TEST unittest_bdev 00:05:02.489 ************************************ 00:05:02.489 07:05:35 -- common/autotest_common.sh@1102 -- # unittest_bdev 00:05:02.489 07:05:35 -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:05:02.489 00:05:02.489 00:05:02.489 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.489 http://cunit.sourceforge.net/ 00:05:02.489 00:05:02.489 00:05:02.489 Suite: bdev 00:05:02.489 Test: bytes_to_blocks_test ...passed 00:05:02.489 Test: num_blocks_test ...passed 00:05:02.489 Test: io_valid_test ...passed 00:05:02.489 Test: open_write_test ...[2024-02-13 07:05:35.755246] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:05:02.489 
[2024-02-13 07:05:35.755586] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:05:02.489 [2024-02-13 07:05:35.755730] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:05:02.489 passed 00:05:02.489 Test: claim_test ...passed 00:05:02.489 Test: alias_add_del_test ...[2024-02-13 07:05:35.842700] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:05:02.489 [2024-02-13 07:05:35.842823] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4578:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:05:02.489 [2024-02-13 07:05:35.842874] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:05:02.489 passed 00:05:02.489 Test: get_device_stat_test ...passed 00:05:02.489 Test: bdev_io_types_test ...passed 00:05:02.489 Test: bdev_io_wait_test ...passed 00:05:02.489 Test: bdev_io_spans_split_test ...passed 00:05:02.489 Test: bdev_io_boundary_split_test ...passed 00:05:02.489 Test: bdev_io_max_size_and_segment_split_test ...[2024-02-13 07:05:36.016612] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3185:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:05:02.490 passed 00:05:02.490 Test: bdev_io_mix_split_test ...passed 00:05:02.490 Test: bdev_io_split_with_io_wait ...passed 00:05:02.490 Test: bdev_io_write_unit_split_test ...[2024-02-13 07:05:36.116604] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:02.490 [2024-02-13 07:05:36.116744] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:05:02.490 [2024-02-13 07:05:36.116778] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:05:02.490 [2024-02-13 07:05:36.116832] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2742:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:05:02.490 passed 00:05:02.490 Test: bdev_io_alignment_with_boundary ...passed 00:05:02.748 Test: bdev_io_alignment ...passed 00:05:02.748 Test: bdev_histograms ...passed 00:05:02.748 Test: bdev_write_zeroes ...passed 00:05:02.748 Test: bdev_compare_and_write ...passed 00:05:02.748 Test: bdev_compare ...passed 00:05:03.007 Test: bdev_compare_emulated ...passed 00:05:03.007 Test: bdev_zcopy_write ...passed 00:05:03.007 Test: bdev_zcopy_read ...passed 00:05:03.007 Test: bdev_open_while_hotremove ...passed 00:05:03.007 Test: bdev_close_while_hotremove ...passed 00:05:03.007 Test: bdev_open_ext_test ...passed 00:05:03.007 Test: bdev_open_ext_unregister ...[2024-02-13 07:05:36.585803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:03.007 passed 00:05:03.007 Test: bdev_set_io_timeout ...[2024-02-13 07:05:36.586055] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8041:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:05:03.007 passed 00:05:03.007 Test: bdev_set_qd_sampling ...passed 00:05:03.007 Test: lba_range_overlap ...passed 00:05:03.266 Test: lock_lba_range_check_ranges ...passed 00:05:03.266 Test: lock_lba_range_with_io_outstanding ...passed 00:05:03.266 Test: lock_lba_range_overlapped ...passed 00:05:03.266 Test: 
bdev_quiesce ...[2024-02-13 07:05:36.799221] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9964:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:05:03.266 passed 00:05:03.266 Test: bdev_io_abort ...passed 00:05:03.266 Test: bdev_unmap ...passed 00:05:03.266 Test: bdev_write_zeroes_split_test ...passed 00:05:03.266 Test: bdev_set_options_test ...passed 00:05:03.266 Test: bdev_get_memory_domains ...passed 00:05:03.266 Test: bdev_io_ext ...[2024-02-13 07:05:36.901446] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 485:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:05:03.266 passed 00:05:03.525 Test: bdev_io_ext_no_opts ...passed 00:05:03.525 Test: bdev_io_ext_invalid_opts ...passed 00:05:03.525 Test: bdev_io_ext_split ...passed 00:05:03.525 Test: bdev_io_ext_bounce_buffer ...passed 00:05:03.525 Test: bdev_register_uuid_alias ...[2024-02-13 07:05:37.066322] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 6ceed24a-d719-477b-ba9d-82c4359c6927 already exists 00:05:03.525 [2024-02-13 07:05:37.066390] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:6ceed24a-d719-477b-ba9d-82c4359c6927 alias for bdev bdev0 00:05:03.525 passed 00:05:03.525 Test: bdev_unregister_by_name ...[2024-02-13 07:05:37.080464] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7831:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:05:03.525 passed 00:05:03.525 Test: for_each_bdev_test ...[2024-02-13 07:05:37.080538] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7839:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 00:05:03.525 passed 00:05:03.525 Test: bdev_seek_test ...passed 00:05:03.525 Test: bdev_copy ...passed 00:05:03.525 Test: bdev_copy_split_test ...passed 00:05:03.525 Test: examine_locks ...passed 00:05:03.525 Test: claim_v2_rwo ...[2024-02-13 07:05:37.164229] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164278] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8565:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164292] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164338] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164353] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164403] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8560:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:05:03.525 passed 00:05:03.525 Test: claim_v2_rom ...[2024-02-13 07:05:37.164537] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164582] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164605] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164625] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164658] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8603:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:05:03.525 [2024-02-13 07:05:37.164687] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:03.525 passed 00:05:03.525 Test: claim_v2_rwm ...[2024-02-13 07:05:37.164790] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:03.525 [2024-02-13 07:05:37.164840] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7935:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164873] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164894] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164907] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164927] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8653:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.164957] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:05:03.525 passed 00:05:03.525 Test: claim_v2_existing_writer ...passed 00:05:03.525 Test: claim_v2_existing_v1 ...[2024-02-13 07:05:37.165099] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:03.525 [2024-02-13 07:05:37.165126] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8598:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:05:03.525 [2024-02-13 07:05:37.165226] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.165252] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.165266] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:05:03.525 passed 00:05:03.525 Test: claim_v1_existing_v2 ...passed 00:05:03.525 Test: examine_claimed ...[2024-02-13 07:05:37.165363] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:05:03.525 [2024-02-13 
07:05:37.165406] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:05:03.525 [2024-02-13 07:05:37.165435] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8402:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:05:03.525 passed 00:05:03.525 00:05:03.525 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.526 suites 1 1 n/a 0 0 00:05:03.526 tests 59 59 59 0 0 00:05:03.526 asserts 4599 4599 4599 0 n/a 00:05:03.526 00:05:03.526 Elapsed time = 1.488 seconds 00:05:03.526 [2024-02-13 07:05:37.165697] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8730:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:05:03.526 07:05:37 -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:05:03.785 00:05:03.785 00:05:03.785 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.785 http://cunit.sourceforge.net/ 00:05:03.785 00:05:03.785 00:05:03.785 Suite: nvme 00:05:03.785 Test: test_create_ctrlr ...passed 00:05:03.785 Test: test_reset_ctrlr ...[2024-02-13 07:05:37.215063] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 passed 00:05:03.785 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:05:03.785 Test: test_failover_ctrlr ...passed 00:05:03.785 Test: test_race_between_failover_and_add_secondary_trid ...[2024-02-13 07:05:37.217593] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.217827] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.218046] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 passed 00:05:03.785 Test: test_pending_reset ...[2024-02-13 07:05:37.219493] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.219773] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 passed 00:05:03.785 Test: test_attach_ctrlr ...[2024-02-13 07:05:37.220919] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4183:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:05:03.785 passed 00:05:03.785 Test: test_aer_cb ...passed 00:05:03.785 Test: test_submit_nvme_cmd ...passed 00:05:03.785 Test: test_add_remove_trid ...passed 00:05:03.785 Test: test_abort ...[2024-02-13 07:05:37.224383] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7172:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 
00:05:03.785 passed 00:05:03.785 Test: test_get_io_qpair ...passed 00:05:03.785 Test: test_bdev_unregister ...passed 00:05:03.785 Test: test_compare_ns ...passed 00:05:03.785 Test: test_init_ana_log_page ...passed 00:05:03.785 Test: test_get_memory_domains ...passed 00:05:03.785 Test: test_reconnect_qpair ...[2024-02-13 07:05:37.227083] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 passed 00:05:03.785 Test: test_create_bdev_ctrlr ...[2024-02-13 07:05:37.227585] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5220:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:05:03.785 passed 00:05:03.785 Test: test_add_multi_ns_to_bdev ...[2024-02-13 07:05:37.228828] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4439:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:05:03.785 passed 00:05:03.785 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:05:03.785 Test: test_admin_path ...passed 00:05:03.785 Test: test_reset_bdev_ctrlr ...passed 00:05:03.785 Test: test_find_io_path ...passed 00:05:03.785 Test: test_retry_io_if_ana_state_is_updating ...passed 00:05:03.785 Test: test_retry_io_for_io_path_error ...passed 00:05:03.785 Test: test_retry_io_count ...passed 00:05:03.785 Test: test_concurrent_read_ana_log_page ...passed 00:05:03.785 Test: test_retry_io_for_ana_error ...passed 00:05:03.785 Test: test_check_io_error_resiliency_params ...[2024-02-13 07:05:37.235681] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5877:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:05:03.785 [2024-02-13 07:05:37.235753] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5881:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:03.785 [2024-02-13 07:05:37.235778] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5890:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:05:03.785 [2024-02-13 07:05:37.235811] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5893:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:05:03.785 [2024-02-13 07:05:37.235832] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5905:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:03.785 [2024-02-13 07:05:37.235863] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5905:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:05:03.785 passed 00:05:03.785 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-02-13 07:05:37.235887] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5885:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:05:03.785 [2024-02-13 07:05:37.235929] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5900:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 
00:05:03.785 [2024-02-13 07:05:37.235956] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5897:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:05:03.785 passed 00:05:03.785 Test: test_reconnect_ctrlr ...[2024-02-13 07:05:37.236741] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.236949] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.237184] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.237301] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 passed 00:05:03.785 Test: test_retry_failover_ctrlr ...[2024-02-13 07:05:37.237432] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.237826] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 passed 00:05:03.785 Test: test_fail_path ...[2024-02-13 07:05:37.238349] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.238484] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.238629] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.238720] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.238861] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 passed 00:05:03.785 Test: test_nvme_ns_cmp ...passed 00:05:03.785 Test: test_ana_transition ...passed 00:05:03.785 Test: test_set_preferred_path ...passed 00:05:03.785 Test: test_find_next_io_path ...passed 00:05:03.785 Test: test_find_io_path_min_qd ...passed 00:05:03.785 Test: test_disable_auto_failback ...[2024-02-13 07:05:37.240510] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 passed 00:05:03.785 Test: test_set_multipath_policy ...passed 00:05:03.785 Test: test_uuid_generation ...passed 00:05:03.785 Test: test_retry_io_to_same_path ...passed 00:05:03.785 Test: test_race_between_reset_and_disconnected ...passed 00:05:03.785 Test: test_ctrlr_op_rpc ...passed 00:05:03.785 Test: test_bdev_ctrlr_op_rpc ...passed 00:05:03.785 Test: test_disable_enable_ctrlr ...[2024-02-13 07:05:37.244205] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:05:03.785 [2024-02-13 07:05:37.244359] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
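test_check_io_error_resiliency_params above spells out every constraint on the reconnect parameters: ctrlr_loss_timeout_sec may not be below -1; if it is 0, both reconnect_delay_sec and fast_io_fail_timeout_sec must also be 0; otherwise reconnect_delay_sec must be nonzero, no larger than ctrlr_loss_timeout_sec, and no larger than a nonzero fast_io_fail_timeout_sec, which itself may not exceed ctrlr_loss_timeout_sec. A standalone sketch of that validation, with field names mirrored from the logged messages rather than taken from the real bdev_nvme structures (the meaning of -1 as "retry forever" is an assumption):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct io_error_resiliency_params {
    int32_t  ctrlr_loss_timeout_sec;    /* -1: keep retrying (assumption) */
    uint32_t reconnect_delay_sec;
    uint32_t fast_io_fail_timeout_sec;
};

static bool params_valid(const struct io_error_resiliency_params *p)
{
    /* "ctrlr_loss_timeout_sec can't be less than -1." */
    if (p->ctrlr_loss_timeout_sec < -1)
        return false;
    if (p->ctrlr_loss_timeout_sec == 0) {
        /* "Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0
         * if ctrlr_loss_timeout_sec is 0." */
        if (p->reconnect_delay_sec != 0 || p->fast_io_fail_timeout_sec != 0)
            return false;
    } else {
        /* "reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec
         * is not 0." */
        if (p->reconnect_delay_sec == 0)
            return false;
        /* "reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec." */
        if (p->ctrlr_loss_timeout_sec > 0 &&
            p->reconnect_delay_sec > (uint32_t)p->ctrlr_loss_timeout_sec)
            return false;
        if (p->fast_io_fail_timeout_sec != 0) {
            /* "fast_io_fail_timeout_sec can't be more than
             * ctrlr_loss_timeout_sec." */
            if (p->ctrlr_loss_timeout_sec > 0 &&
                p->fast_io_fail_timeout_sec >
                    (uint32_t)p->ctrlr_loss_timeout_sec)
                return false;
            /* "reconnect_delay_sec can't be more than
             * fast_io_fail_timeout_sec." */
            if (p->reconnect_delay_sec > p->fast_io_fail_timeout_sec)
                return false;
        }
    }
    return true;
}

int main(void)
{
    struct io_error_resiliency_params bad = { .ctrlr_loss_timeout_sec = -2 };
    struct io_error_resiliency_params ok = {
        .ctrlr_loss_timeout_sec = 10, .reconnect_delay_sec = 2,
        .fast_io_fail_timeout_sec = 5,
    };
    printf("bad: %d, ok: %d\n", params_valid(&bad), params_valid(&ok));
    return 0;
}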
00:05:03.785 passed 00:05:03.785 Test: test_delete_ctrlr_done ...passed 00:05:03.785 00:05:03.786 Run Summary: Type Total Ran Passed Failed Inactive 00:05:03.786 suites 1 1 n/a 0 0 00:05:03.786 tests 47 47 47 0 0 00:05:03.786 asserts 3527 3527 3527 0 n/a 00:05:03.786 00:05:03.786 Elapsed time = 0.031 seconds 00:05:03.786 07:05:37 -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:05:03.786 Test Options 00:05:03.786 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:05:03.786 00:05:03.786 00:05:03.786 CUnit - A unit testing framework for C - Version 2.1-3 00:05:03.786 http://cunit.sourceforge.net/ 00:05:03.786 00:05:03.786 00:05:03.786 Suite: raid 00:05:03.786 Test: test_create_raid ...passed 00:05:03.786 Test: test_create_raid_superblock ...passed 00:05:03.786 Test: test_delete_raid ...passed 00:05:03.786 Test: test_create_raid_invalid_args ...passed 00:05:03.786 Test: test_delete_raid_invalid_args ...[2024-02-13 07:05:37.288736] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1357:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:05:03.786 [2024-02-13 07:05:37.289257] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1351:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:05:03.786 [2024-02-13 07:05:37.289764] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1341:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:05:03.786 [2024-02-13 07:05:37.290065] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:03.786 [2024-02-13 07:05:37.290951] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:2934:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:05:03.786 passed 00:05:03.786 Test: test_io_channel ...passed 00:05:03.786 Test: test_reset_io ...passed 00:05:03.786 Test: test_write_io ...passed 00:05:03.786 Test: test_read_io ...passed 00:05:04.722 Test: test_unmap_io ...passed 00:05:04.722 Test: test_io_failure ...[2024-02-13 07:05:38.143376] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c: 832:raid_bdev_submit_request: *ERROR*: submit request, invalid io type 0 00:05:04.722 passed 00:05:04.722 Test: test_multi_raid_no_io ...passed 00:05:04.722 Test: test_multi_raid_with_io ...passed 00:05:04.722 Test: test_io_type_supported ...passed 00:05:04.722 Test: test_raid_json_dump_info ...passed 00:05:04.722 Test: test_context_size ...passed 00:05:04.722 Test: test_raid_level_conversions ...passed 00:05:04.722 Test: test_raid_process ...passed 00:05:04.722 Test: test_raid_io_split ...passed 00:05:04.722 00:05:04.722 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.722 suites 1 1 n/a 0 0 00:05:04.722 tests 19 19 19 0 0 00:05:04.722 asserts 177879 177879 177879 0 n/a 00:05:04.722 00:05:04.722 Elapsed time = 0.868 seconds 00:05:04.722 07:05:38 -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:05:04.722 00:05:04.722 00:05:04.722 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.722 http://cunit.sourceforge.net/ 00:05:04.722 00:05:04.722 00:05:04.722 Suite: raid_sb 00:05:04.722 Test: test_raid_bdev_write_superblock ...passed 00:05:04.722 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:05:04.722 Test: test_raid_bdev_parse_superblock ...passed 00:05:04.722 00:05:04.722 [2024-02-13 
07:05:38.192027] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 120:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:05:04.722 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.722 suites 1 1 n/a 0 0 00:05:04.722 tests 3 3 3 0 0 00:05:04.722 asserts 32 32 32 0 n/a 00:05:04.722 00:05:04.722 Elapsed time = 0.001 seconds 00:05:04.722 07:05:38 -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:05:04.722 00:05:04.722 00:05:04.722 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.722 http://cunit.sourceforge.net/ 00:05:04.722 00:05:04.722 00:05:04.722 Suite: concat 00:05:04.722 Test: test_concat_start ...passed 00:05:04.722 Test: test_concat_rw ...passed 00:05:04.722 Test: test_concat_null_payload ...passed 00:05:04.722 00:05:04.722 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.722 suites 1 1 n/a 0 0 00:05:04.722 tests 3 3 3 0 0 00:05:04.722 asserts 8097 8097 8097 0 n/a 00:05:04.722 00:05:04.723 Elapsed time = 0.005 seconds 00:05:04.723 07:05:38 -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:05:04.723 00:05:04.723 00:05:04.723 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.723 http://cunit.sourceforge.net/ 00:05:04.723 00:05:04.723 00:05:04.723 Suite: raid1 00:05:04.723 Test: test_raid1_start ...passed 00:05:04.723 Test: test_raid1_read_balancing ...passed 00:05:04.723 00:05:04.723 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.723 suites 1 1 n/a 0 0 00:05:04.723 tests 2 2 2 0 0 00:05:04.723 asserts 2856 2856 2856 0 n/a 00:05:04.723 00:05:04.723 Elapsed time = 0.004 seconds 00:05:04.723 07:05:38 -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:05:04.723 00:05:04.723 00:05:04.723 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.723 http://cunit.sourceforge.net/ 00:05:04.723 00:05:04.723 00:05:04.723 Suite: zone 00:05:04.723 Test: test_zone_get_operation ...passed 00:05:04.723 Test: test_bdev_zone_get_info ...passed 00:05:04.723 Test: test_bdev_zone_management ...passed 00:05:04.723 Test: test_bdev_zone_append ...passed 00:05:04.723 Test: test_bdev_zone_append_with_md ...passed 00:05:04.723 Test: test_bdev_zone_appendv ...passed 00:05:04.723 Test: test_bdev_zone_appendv_with_md ...passed 00:05:04.723 Test: test_bdev_io_get_append_location ...passed 00:05:04.723 00:05:04.723 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.723 suites 1 1 n/a 0 0 00:05:04.723 tests 8 8 8 0 0 00:05:04.723 asserts 94 94 94 0 n/a 00:05:04.723 00:05:04.723 Elapsed time = 0.000 seconds 00:05:04.723 07:05:38 -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:05:04.723 00:05:04.723 00:05:04.723 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.723 http://cunit.sourceforge.net/ 00:05:04.723 00:05:04.723 00:05:04.723 Suite: gpt_parse 00:05:04.723 Test: test_parse_mbr_and_primary ...[2024-02-13 07:05:38.330782] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:04.723 [2024-02-13 07:05:38.331032] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:04.723 [2024-02-13 07:05:38.331091] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 
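The gpt_parse suite drives a header through each validation failure in turn (several of the messages appear just below). A minimal sketch of those checks: the my_lba == 1, max-128-entries, and 80-byte-entry expectations are mirrored from the logged assertions, while the 92-byte header size and the "EFI PART" signature are the standard GPT values, assumed here rather than read out of SPDK's gpt.c; the two CRC32 comparisons the log also exercises are elided for brevity.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#pragma pack(push, 1)
struct gpt_header {
    char     signature[8];           /* "EFI PART" */
    uint32_t revision;
    uint32_t header_size;            /* standard GPT header: 92 bytes */
    uint32_t header_crc32;
    uint32_t reserved;
    uint64_t my_lba;                 /* expected: 1 for the primary header */
    uint64_t alternate_lba;
    uint64_t first_usable_lba;
    uint64_t last_usable_lba;
    uint8_t  disk_guid[16];
    uint64_t partition_entry_lba;
    uint32_t num_partition_entries;  /* max accepted here: 128 */
    uint32_t partition_entry_size;   /* expected in these tests: 80 */
    uint32_t partition_array_crc32;
};
#pragma pack(pop)

static int gpt_read_header(const struct gpt_header *h, uint64_t lba_end)
{
    if (h == NULL)
        return -1;                   /* buffer "should not be NULL" */
    if (h->header_size != 92)
        return -1;                   /* "head_size=..." rejected */
    if (memcmp(h->signature, "EFI PART", 8) != 0)
        return -1;                   /* "signature did not match" */
    if (h->my_lba != 1)
        return -1;                   /* "head my_lba(...) != expected(1)" */
    if (h->last_usable_lba > lba_end)
        return -1;                   /* "lba range check error" */
    if (h->num_partition_entries > 128)
        return -1;                   /* "...which exceeds max=128" */
    if (h->partition_entry_size != 80)
        return -1;                   /* "...!= expected(80)" */
    return 0;
}

int main(void)
{
    struct gpt_header h = { .header_size = 92, .my_lba = 1,
                            .num_partition_entries = 128,
                            .partition_entry_size = 80 };
    memcpy(h.signature, "EFI PART", 8);
    printf("header ok: %d\n", gpt_read_header(&h, 0xFFFF));
    return 0;
}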
00:05:04.723 [2024-02-13 07:05:38.331158] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:04.723 [2024-02-13 07:05:38.331205] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:04.723 [2024-02-13 07:05:38.331273] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:04.723 passed 00:05:04.723 Test: test_parse_secondary ...[2024-02-13 07:05:38.332026] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:05:04.723 [2024-02-13 07:05:38.332068] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:05:04.723 [2024-02-13 07:05:38.332096] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:05:04.723 [2024-02-13 07:05:38.332123] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:05:04.723 passed 00:05:04.723 Test: test_check_mbr ...[2024-02-13 07:05:38.332872] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:04.723 passed 00:05:04.723 Test: test_read_header ...[2024-02-13 07:05:38.332910] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:05:04.723 [2024-02-13 07:05:38.332975] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:05:04.723 [2024-02-13 07:05:38.333054] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:05:04.723 [2024-02-13 07:05:38.333171] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:05:04.723 [2024-02-13 07:05:38.333215] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:05:04.723 [2024-02-13 07:05:38.333241] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:05:04.723 passed 00:05:04.723 Test: test_read_partitions ...[2024-02-13 07:05:38.333266] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:05:04.723 [2024-02-13 07:05:38.333317] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:05:04.723 [2024-02-13 07:05:38.333357] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:05:04.723 [2024-02-13 07:05:38.333384] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:05:04.723 [2024-02-13 07:05:38.333403] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:05:04.723 [2024-02-13 07:05:38.333779] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:05:04.723 passed 00:05:04.723 00:05:04.723 Run 
Summary: Type Total Ran Passed Failed Inactive 00:05:04.723 suites 1 1 n/a 0 0 00:05:04.723 tests 5 5 5 0 0 00:05:04.723 asserts 33 33 33 0 n/a 00:05:04.723 00:05:04.723 Elapsed time = 0.004 seconds 00:05:04.723 07:05:38 -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:05:04.723 00:05:04.723 00:05:04.723 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.723 http://cunit.sourceforge.net/ 00:05:04.723 00:05:04.723 00:05:04.723 Suite: bdev_part 00:05:04.723 Test: part_test ...[2024-02-13 07:05:38.368748] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4548:bdev_name_add: *ERROR*: Bdev name test1 already exists 00:05:04.723 passed 00:05:04.723 Test: part_free_test ...passed 00:05:04.983 Test: part_get_io_channel_test ...passed 00:05:04.983 Test: part_construct_ext ...passed 00:05:04.983 00:05:04.983 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.983 suites 1 1 n/a 0 0 00:05:04.983 tests 4 4 4 0 0 00:05:04.983 asserts 48 48 48 0 n/a 00:05:04.983 00:05:04.983 Elapsed time = 0.048 seconds 00:05:04.983 07:05:38 -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:05:04.983 00:05:04.983 00:05:04.983 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.983 http://cunit.sourceforge.net/ 00:05:04.983 00:05:04.983 00:05:04.983 Suite: scsi_nvme_suite 00:05:04.983 Test: scsi_nvme_translate_test ...passed 00:05:04.983 00:05:04.983 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.983 suites 1 1 n/a 0 0 00:05:04.983 tests 1 1 1 0 0 00:05:04.983 asserts 104 104 104 0 n/a 00:05:04.983 00:05:04.983 Elapsed time = 0.000 seconds 00:05:04.983 07:05:38 -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:05:04.983 00:05:04.983 00:05:04.983 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.983 http://cunit.sourceforge.net/ 00:05:04.983 00:05:04.983 00:05:04.983 Suite: lvol 00:05:04.983 Test: ut_lvs_init ...[2024-02-13 07:05:38.481617] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:05:04.983 passed 00:05:04.983 Test: ut_lvol_init ...passed 00:05:04.983 Test: ut_lvol_snapshot ...[2024-02-13 07:05:38.482073] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:05:04.983 passed 00:05:04.983 Test: ut_lvol_clone ...passed 00:05:04.983 Test: ut_lvs_destroy ...passed 00:05:04.983 Test: ut_lvs_unload ...passed 00:05:04.983 Test: ut_lvol_resize ...passed 00:05:04.983 Test: ut_lvol_set_read_only ...[2024-02-13 07:05:38.483501] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1391:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:05:04.983 passed 00:05:04.983 Test: ut_lvol_hotremove ...passed 00:05:04.983 Test: ut_vbdev_lvol_get_io_channel ...passed 00:05:04.983 Test: ut_vbdev_lvol_io_type_supported ...passed 00:05:04.983 Test: ut_lvol_read_write ...passed 00:05:04.983 Test: ut_vbdev_lvol_submit_request ...passed 00:05:04.983 Test: ut_lvol_examine_config ...passed 00:05:04.983 Test: ut_lvol_examine_disk ...[2024-02-13 07:05:38.484154] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1533:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:05:04.983 passed 00:05:04.983 Test: ut_lvol_rename ...[2024-02-13 07:05:38.485093] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot 
add alias 'lvs/new_lvol_name' 00:05:04.983 [2024-02-13 07:05:38.485184] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1341:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:05:04.983 passed 00:05:04.983 Test: ut_bdev_finish ...passed 00:05:04.983 Test: ut_lvs_rename ...passed 00:05:04.983 Test: ut_lvol_seek ...passed 00:05:04.983 Test: ut_esnap_dev_create ...[2024-02-13 07:05:38.485888] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1868:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:05:04.983 [2024-02-13 07:05:38.485964] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1874:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:05:04.983 [2024-02-13 07:05:38.485990] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:05:04.983 [2024-02-13 07:05:38.486030] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1900:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : unable to claim esnap bdev 'a27fd8fe-d4b9-431e-a044-271016228ce4': -1 00:05:04.983 passed 00:05:04.983 Test: ut_lvol_esnap_clone_bad_args ...passed 00:05:04.983 00:05:04.983 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.983 suites 1 1 n/a 0 0 00:05:04.983 tests 21 21 21 0 0 00:05:04.983 asserts 712 712 712 0 n/a 00:05:04.983 00:05:04.983 Elapsed time = 0.005 seconds 00:05:04.983 [2024-02-13 07:05:38.486166] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1277:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:05:04.983 [2024-02-13 07:05:38.486198] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1284:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9d1-aa17f37dd8db' could not be opened: error -19 00:05:04.983 07:05:38 -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:05:04.983 00:05:04.983 00:05:04.983 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.983 http://cunit.sourceforge.net/ 00:05:04.983 00:05:04.983 00:05:04.983 Suite: zone_block 00:05:04.983 Test: test_zone_block_create ...passed 00:05:04.983 Test: test_zone_block_create_invalid ...[2024-02-13 07:05:38.537235] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:05:04.983 [2024-02-13 07:05:38.537550] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-02-13 07:05:38.537722] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:05:04.983 [2024-02-13 07:05:38.537785] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-02-13 07:05:38.537979] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:05:04.983 [2024-02-13 07:05:38.538016] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:05:04.983 Test: test_get_zone_info ...[2024-02-13 07:05:38.538100] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 
865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:05:04.983 [2024-02-13 07:05:38.538151] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-02-13 07:05:38.538698] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.538777] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.538823] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 passed 00:05:04.983 Test: test_supported_io_types ...passed 00:05:04.983 Test: test_reset_zone ...[2024-02-13 07:05:38.539628] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 passed 00:05:04.983 Test: test_open_zone ...[2024-02-13 07:05:38.539688] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.540144] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.540854] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 passed 00:05:04.983 Test: test_zone_write ...[2024-02-13 07:05:38.540919] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.541413] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:04.983 [2024-02-13 07:05:38.541454] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.541507] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:04.983 [2024-02-13 07:05:38.541550] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.546985] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:05:04.983 [2024-02-13 07:05:38.547036] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.547117] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:05:04.983 [2024-02-13 07:05:38.547143] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:05:04.983 [2024-02-13 07:05:38.552562] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:04.983 passed 00:05:04.983 Test: test_zone_read ...[2024-02-13 07:05:38.552631] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.553082] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:05:04.983 [2024-02-13 07:05:38.553124] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.553199] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:05:04.983 [2024-02-13 07:05:38.553228] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.553671] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:05:04.983 [2024-02-13 07:05:38.553707] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 passed 00:05:04.983 Test: test_close_zone ...[2024-02-13 07:05:38.554105] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.983 [2024-02-13 07:05:38.554207] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.984 [2024-02-13 07:05:38.554451] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.984 [2024-02-13 07:05:38.554496] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.984 passed 00:05:04.984 Test: test_finish_zone ...[2024-02-13 07:05:38.555119] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.984 [2024-02-13 07:05:38.555186] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.984 passed 00:05:04.984 Test: test_append_zone ...[2024-02-13 07:05:38.555551] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:05:04.984 [2024-02-13 07:05:38.555591] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.984 [2024-02-13 07:05:38.555647] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:05:04.984 [2024-02-13 07:05:38.555672] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
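The test_zone_write failures above encode the zoned-write contract: the target zone must exist and be in a writable state, the write must start exactly at the zone's write pointer, and it must not run past the zone's capacity. A standalone sketch under those rules; the struct layout and the open flag are illustrative, not vbdev_zone_block's real state machine.

#include <stdint.h>
#include <stdio.h>

struct zone {
    uint64_t start_lba;   /* first lba of the zone */
    uint64_t capacity;    /* writable blocks in the zone */
    uint64_t write_ptr;   /* next lba that may be written */
    int      open;        /* 0 = not writable (e.g. full), 1 = open/empty */
};

static int zone_block_write(struct zone *z, uint64_t lba, uint64_t len,
                            uint64_t num_zones, uint64_t zone_size)
{
    /* "Trying to write to invalid zone (lba 0x5000)" */
    if (lba / zone_size >= num_zones)
        return -1;
    /* "Trying to write to zone in invalid state ..." */
    if (!z->open)
        return -1;
    /* "Trying to write to zone with invalid address (lba 0x407, wp 0x405)" */
    if (lba != z->write_ptr)
        return -1;
    /* "Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)" */
    if (lba + len > z->start_lba + z->capacity)
        return -1;
    z->write_ptr += len;
    return 0;
}

int main(void)
{
    struct zone z = { .start_lba = 0x400, .capacity = 0x3f8,
                      .write_ptr = 0x405, .open = 1 };
    /* lba 0x407 != wp 0x405 -> rejected, as in the log above */
    printf("misaligned write: %d\n",
           zone_block_write(&z, 0x407, 0x10, 16, 0x400));
    return 0;
}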
00:05:04.984 [2024-02-13 07:05:38.566626] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:05:04.984 [2024-02-13 07:05:38.566685] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:05:04.984 passed 00:05:04.984 00:05:04.984 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.984 suites 1 1 n/a 0 0 00:05:04.984 tests 11 11 11 0 0 00:05:04.984 asserts 3437 3437 3437 0 n/a 00:05:04.984 00:05:04.984 Elapsed time = 0.031 seconds 00:05:04.984 07:05:38 -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:05:04.984 00:05:04.984 00:05:04.984 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.984 http://cunit.sourceforge.net/ 00:05:04.984 00:05:04.984 00:05:04.984 Suite: bdev 00:05:04.984 Test: basic ...[2024-02-13 07:05:38.665767] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55d539986e01): Operation not permitted (rc=-1) 00:05:04.984 [2024-02-13 07:05:38.666123] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x55d539986dc0): Operation not permitted (rc=-1) 00:05:04.984 [2024-02-13 07:05:38.666167] thread.c:2359:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x55d539986e01): Operation not permitted (rc=-1) 00:05:05.242 passed 00:05:05.242 Test: unregister_and_close ...passed 00:05:05.242 Test: unregister_and_close_different_threads ...passed 00:05:05.242 Test: basic_qos ...passed 00:05:05.242 Test: put_channel_during_reset ...passed 00:05:05.242 Test: aborted_reset ...passed 00:05:05.500 Test: aborted_reset_no_outstanding_io ...passed 00:05:05.500 Test: io_during_reset ...passed 00:05:05.500 Test: reset_completions ...passed 00:05:05.500 Test: io_during_qos_queue ...passed 00:05:05.500 Test: io_during_qos_reset ...passed 00:05:05.500 Test: enomem ...passed 00:05:05.759 Test: enomem_multi_bdev ...passed 00:05:05.759 Test: enomem_multi_bdev_unregister ...passed 00:05:05.759 Test: enomem_multi_io_target ...passed 00:05:05.759 Test: qos_dynamic_enable ...passed 00:05:05.759 Test: bdev_histograms_mt ...passed 00:05:05.759 Test: bdev_set_io_timeout_mt ...[2024-02-13 07:05:39.382385] thread.c: 465:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered 00:05:05.759 passed 00:05:05.759 Test: lock_lba_range_then_submit_io ...[2024-02-13 07:05:39.399556] thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x55d539986d80 already registered (old:0x6130000003c0 new:0x613000000c80) 00:05:05.759 passed 00:05:06.018 Test: unregister_during_reset ...passed 00:05:06.018 Test: event_notify_and_close ...passed 00:05:06.018 Suite: bdev_wrong_thread 00:05:06.018 Test: spdk_bdev_register_wt ...[2024-02-13 07:05:39.499004] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8359:spdk_bdev_register: *ERROR*: Cannot examine bdev wt_bdev on thread 0x618000000880 (0x618000000880) 00:05:06.018 passed 00:05:06.018 Test: spdk_bdev_examine_wt ...[2024-02-13 07:05:39.499344] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 793:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x618000000880 (0x618000000880) 00:05:06.018 passed 00:05:06.018 00:05:06.018 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.018 suites 2 2 n/a 0 0 00:05:06.018 tests 23 23 23 0 0 00:05:06.018 
asserts 601 601 601 0 n/a 00:05:06.018 00:05:06.018 Elapsed time = 0.862 seconds 00:05:06.018 00:05:06.018 real 0m3.869s 00:05:06.018 user 0m1.780s 00:05:06.018 sys 0m2.093s 00:05:06.018 07:05:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:06.018 ************************************ 00:05:06.018 END TEST unittest_bdev 00:05:06.018 ************************************ 00:05:06.018 07:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:06.018 07:05:39 -- unit/unittest.sh@213 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:06.018 07:05:39 -- unit/unittest.sh@218 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:06.018 07:05:39 -- unit/unittest.sh@223 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:06.018 07:05:39 -- unit/unittest.sh@227 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:05:06.018 07:05:39 -- unit/unittest.sh@228 -- # run_test unittest_bdev_raid5f /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:06.018 07:05:39 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:05:06.018 07:05:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:05:06.018 07:05:39 -- common/autotest_common.sh@10 -- # set +x 00:05:06.018 ************************************ 00:05:06.018 START TEST unittest_bdev_raid5f 00:05:06.018 ************************************ 00:05:06.018 07:05:39 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid5f.c/raid5f_ut 00:05:06.018 00:05:06.018 00:05:06.018 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.018 http://cunit.sourceforge.net/ 00:05:06.018 00:05:06.018 00:05:06.018 Suite: raid5f 00:05:06.018 Test: test_raid5f_start ...passed 00:05:06.586 Test: test_raid5f_submit_read_request ...passed 00:05:06.586 Test: test_raid5f_stripe_request_map_iovecs ...passed 00:05:09.873 Test: test_raid5f_submit_full_stripe_write_request ...passed 00:05:27.961 Test: test_raid5f_chunk_write_error ...passed 00:05:33.231 Test: test_raid5f_chunk_write_error_with_enomem ...passed 00:05:36.518 Test: test_raid5f_submit_full_stripe_write_request_degraded ...passed 00:06:03.065 Test: test_raid5f_submit_read_request_degraded ...passed 00:06:03.065 00:06:03.065 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.065 suites 1 1 n/a 0 0 00:06:03.065 tests 8 8 8 0 0 00:06:03.065 asserts 351864 351864 351864 0 n/a 00:06:03.065 00:06:03.065 Elapsed time = 53.183 seconds 00:06:03.065 00:06:03.065 real 0m53.264s 00:06:03.065 user 0m50.317s 00:06:03.065 sys 0m2.936s 00:06:03.065 ************************************ 00:06:03.065 END TEST unittest_bdev_raid5f 00:06:03.065 ************************************ 00:06:03.065 07:06:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:03.065 07:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:03.065 07:06:32 -- unit/unittest.sh@231 -- # run_test unittest_blob_blobfs unittest_blob 00:06:03.065 07:06:32 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:03.065 07:06:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:03.065 07:06:32 -- common/autotest_common.sh@10 -- # set +x 00:06:03.065 ************************************ 00:06:03.065 START TEST unittest_blob_blobfs 00:06:03.065 ************************************ 00:06:03.065 07:06:32 -- common/autotest_common.sh@1102 -- # 
unittest_blob 00:06:03.065 07:06:32 -- unit/unittest.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:06:03.065 07:06:32 -- unit/unittest.sh@39 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:06:03.065 00:06:03.065 00:06:03.065 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.065 http://cunit.sourceforge.net/ 00:06:03.065 00:06:03.065 00:06:03.065 Suite: blob_nocopy_noextent 00:06:03.065 Test: blob_init ...[2024-02-13 07:06:32.950554] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:03.065 passed 00:06:03.065 Test: blob_thin_provision ...passed 00:06:03.065 Test: blob_read_only ...passed 00:06:03.065 Test: bs_load ...[2024-02-13 07:06:33.065782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:03.065 passed 00:06:03.065 Test: bs_load_custom_cluster_size ...passed 00:06:03.065 Test: bs_load_after_failed_grow ...passed 00:06:03.065 Test: bs_cluster_sz ...[2024-02-13 07:06:33.102380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:03.065 [2024-02-13 07:06:33.102917] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:06:03.065 [2024-02-13 07:06:33.103105] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:03.065 passed 00:06:03.065 Test: bs_resize_md ...passed 00:06:03.065 Test: bs_destroy ...passed 00:06:03.065 Test: bs_type ...passed 00:06:03.065 Test: bs_super_block ...passed 00:06:03.065 Test: bs_test_recover_cluster_count ...passed 00:06:03.065 Test: bs_grow_live ...passed 00:06:03.065 Test: bs_grow_live_no_space ...passed 00:06:03.065 Test: bs_test_grow ...passed 00:06:03.065 Test: blob_serialize_test ...passed 00:06:03.065 Test: super_block_crc ...passed 00:06:03.065 Test: blob_thin_prov_write_count_io ...passed 00:06:03.065 Test: bs_load_iter_test ...passed 00:06:03.065 Test: blob_relations ...[2024-02-13 07:06:33.282546] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.065 [2024-02-13 07:06:33.282717] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.065 [2024-02-13 07:06:33.283695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.065 [2024-02-13 07:06:33.283781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.065 passed 00:06:03.066 Test: blob_relations2 ...[2024-02-13 07:06:33.299682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.066 [2024-02-13 07:06:33.299785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:33.299839] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.066 [2024-02-13 07:06:33.299859] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:33.301374] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.066 [2024-02-13 07:06:33.301485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:33.301914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.066 [2024-02-13 07:06:33.301981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 passed 00:06:03.066 Test: blob_relations3 ...passed 00:06:03.066 Test: blobstore_clean_power_failure ...passed 00:06:03.066 Test: blob_delete_snapshot_power_failure ...[2024-02-13 07:06:33.471388] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:03.066 [2024-02-13 07:06:33.486228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:03.066 [2024-02-13 07:06:33.486339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:03.066 [2024-02-13 07:06:33.486410] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:33.500440] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:03.066 [2024-02-13 07:06:33.500531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:03.066 [2024-02-13 07:06:33.500612] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:03.066 [2024-02-13 07:06:33.500646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:33.516268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:03.066 [2024-02-13 07:06:33.516459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:33.531694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:03.066 [2024-02-13 07:06:33.531838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:33.545909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:03.066 [2024-02-13 07:06:33.546050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 passed 00:06:03.066 Test: blob_create_snapshot_power_failure ...[2024-02-13 07:06:33.589398] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:03.066 [2024-02-13 07:06:33.616995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: 
Metadata page 1 read failed for blobid 0x100000001: -5 00:06:03.066 [2024-02-13 07:06:33.631251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:03.066 passed 00:06:03.066 Test: blob_io_unit ...passed 00:06:03.066 Test: blob_io_unit_compatibility ...passed 00:06:03.066 Test: blob_ext_md_pages ...passed 00:06:03.066 Test: blob_esnap_io_4096_4096 ...passed 00:06:03.066 Test: blob_esnap_io_512_512 ...passed 00:06:03.066 Test: blob_esnap_io_4096_512 ...passed 00:06:03.066 Test: blob_esnap_io_512_4096 ...passed 00:06:03.066 Suite: blob_bs_nocopy_noextent 00:06:03.066 Test: blob_open ...passed 00:06:03.066 Test: blob_create ...[2024-02-13 07:06:33.901376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:03.066 passed 00:06:03.066 Test: blob_create_loop ...passed 00:06:03.066 Test: blob_create_fail ...[2024-02-13 07:06:34.009526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:03.066 passed 00:06:03.066 Test: blob_create_internal ...passed 00:06:03.066 Test: blob_create_zero_extent ...passed 00:06:03.066 Test: blob_snapshot ...passed 00:06:03.066 Test: blob_clone ...passed 00:06:03.066 Test: blob_inflate ...[2024-02-13 07:06:34.226581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:03.066 passed 00:06:03.066 Test: blob_delete ...passed 00:06:03.066 Test: blob_resize_test ...[2024-02-13 07:06:34.301796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:03.066 passed 00:06:03.066 Test: channel_ops ...passed 00:06:03.066 Test: blob_super ...passed 00:06:03.066 Test: blob_rw_verify_iov ...passed 00:06:03.066 Test: blob_unmap ...passed 00:06:03.066 Test: blob_iter ...passed 00:06:03.066 Test: blob_parse_md ...passed 00:06:03.066 Test: bs_load_pending_removal ...passed 00:06:03.066 Test: bs_unload ...[2024-02-13 07:06:34.586866] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:03.066 passed 00:06:03.066 Test: bs_usable_clusters ...passed 00:06:03.066 Test: blob_crc ...[2024-02-13 07:06:34.667729] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:03.066 [2024-02-13 07:06:34.667919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:03.066 passed 00:06:03.066 Test: blob_flags ...passed 00:06:03.066 Test: bs_version ...passed 00:06:03.066 Test: blob_set_xattrs_test ...[2024-02-13 07:06:34.783819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:03.066 [2024-02-13 07:06:34.783956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:03.066 passed 00:06:03.066 Test: blob_thin_prov_alloc ...passed 00:06:03.066 Test: blob_insert_cluster_msg_test ...passed 00:06:03.066 Test: blob_thin_prov_rw ...passed 00:06:03.066 Test: blob_thin_prov_rle ...passed 00:06:03.066 Test: 
blob_thin_prov_rw_iov ...passed 00:06:03.066 Test: blob_snapshot_rw ...passed 00:06:03.066 Test: blob_snapshot_rw_iov ...passed 00:06:03.066 Test: blob_inflate_rw ...passed 00:06:03.066 Test: blob_snapshot_freeze_io ...passed 00:06:03.066 Test: blob_operation_split_rw ...passed 00:06:03.066 Test: blob_operation_split_rw_iov ...passed 00:06:03.066 Test: blob_simultaneous_operations ...[2024-02-13 07:06:35.698276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.066 [2024-02-13 07:06:35.698385] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:35.699544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.066 [2024-02-13 07:06:35.699604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:35.710463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.066 [2024-02-13 07:06:35.710582] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 [2024-02-13 07:06:35.710738] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:03.066 [2024-02-13 07:06:35.710787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.066 passed 00:06:03.066 Test: blob_persist_test ...passed 00:06:03.066 Test: blob_decouple_snapshot ...passed 00:06:03.066 Test: blob_seek_io_unit ...passed 00:06:03.066 Test: blob_nested_freezes ...passed 00:06:03.066 Suite: blob_blob_nocopy_noextent 00:06:03.066 Test: blob_write ...passed 00:06:03.066 Test: blob_read ...passed 00:06:03.066 Test: blob_rw_verify ...passed 00:06:03.066 Test: blob_rw_verify_iov_nomem ...passed 00:06:03.066 Test: blob_rw_iov_read_only ...passed 00:06:03.066 Test: blob_xattr ...passed 00:06:03.066 Test: blob_dirty_shutdown ...passed 00:06:03.066 Test: blob_is_degraded ...passed 00:06:03.066 Suite: blob_esnap_bs_nocopy_noextent 00:06:03.066 Test: blob_esnap_create ...passed 00:06:03.066 Test: blob_esnap_thread_add_remove ...passed 00:06:03.066 Test: blob_esnap_clone_snapshot ...passed 00:06:03.066 Test: blob_esnap_clone_inflate ...passed 00:06:03.066 Test: blob_esnap_clone_decouple ...passed 00:06:03.066 Test: blob_esnap_clone_reload ...passed 00:06:03.066 Test: blob_esnap_hotplug ...passed 00:06:03.066 Suite: blob_nocopy_extent 00:06:03.066 Test: blob_init ...[2024-02-13 07:06:36.453233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:03.066 passed 00:06:03.066 Test: blob_thin_provision ...passed 00:06:03.066 Test: blob_read_only ...passed 00:06:03.066 Test: bs_load ...[2024-02-13 07:06:36.496677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:03.066 passed 00:06:03.066 Test: bs_load_custom_cluster_size ...passed 00:06:03.066 Test: bs_load_after_failed_grow ...passed 00:06:03.066 Test: bs_cluster_sz ...[2024-02-13 07:06:36.521407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:03.066 
[2024-02-13 07:06:36.521795] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:06:03.067 [2024-02-13 07:06:36.521852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:03.067 passed 00:06:03.067 Test: bs_resize_md ...passed 00:06:03.067 Test: bs_destroy ...passed 00:06:03.067 Test: bs_type ...passed 00:06:03.067 Test: bs_super_block ...passed 00:06:03.067 Test: bs_test_recover_cluster_count ...passed 00:06:03.067 Test: bs_grow_live ...passed 00:06:03.067 Test: bs_grow_live_no_space ...passed 00:06:03.067 Test: bs_test_grow ...passed 00:06:03.067 Test: blob_serialize_test ...passed 00:06:03.067 Test: super_block_crc ...passed 00:06:03.067 Test: blob_thin_prov_write_count_io ...passed 00:06:03.067 Test: bs_load_iter_test ...passed 00:06:03.067 Test: blob_relations ...[2024-02-13 07:06:36.677704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.067 [2024-02-13 07:06:36.677840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.067 [2024-02-13 07:06:36.678890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.067 [2024-02-13 07:06:36.678982] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.067 passed 00:06:03.067 Test: blob_relations2 ...[2024-02-13 07:06:36.694200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.067 [2024-02-13 07:06:36.694306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.067 [2024-02-13 07:06:36.694339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.067 [2024-02-13 07:06:36.694370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.067 [2024-02-13 07:06:36.695922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.067 [2024-02-13 07:06:36.696011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.067 [2024-02-13 07:06:36.696459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:03.067 [2024-02-13 07:06:36.696516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.067 passed 00:06:03.067 Test: blob_relations3 ...passed 00:06:03.327 Test: blobstore_clean_power_failure ...passed 00:06:03.327 Test: blob_delete_snapshot_power_failure ...[2024-02-13 07:06:36.845048] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:03.327 [2024-02-13 07:06:36.856899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:03.327 [2024-02-13 
07:06:36.868796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:03.327 [2024-02-13 07:06:36.868903] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:03.327 [2024-02-13 07:06:36.868950] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.327 [2024-02-13 07:06:36.880780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:03.327 [2024-02-13 07:06:36.880870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:03.327 [2024-02-13 07:06:36.880920] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:03.327 [2024-02-13 07:06:36.880955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.327 [2024-02-13 07:06:36.892701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:03.327 [2024-02-13 07:06:36.892793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:03.327 [2024-02-13 07:06:36.892838] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:03.327 [2024-02-13 07:06:36.892881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.327 [2024-02-13 07:06:36.904746] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:03.327 [2024-02-13 07:06:36.904875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.327 [2024-02-13 07:06:36.916726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:03.327 [2024-02-13 07:06:36.916860] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.327 [2024-02-13 07:06:36.928737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:03.327 [2024-02-13 07:06:36.928856] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:03.327 passed 00:06:03.327 Test: blob_create_snapshot_power_failure ...[2024-02-13 07:06:36.964120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:03.327 [2024-02-13 07:06:36.976575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:03.327 [2024-02-13 07:06:37.000289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:03.327 [2024-02-13 07:06:37.012616] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:03.586 passed 00:06:03.586 Test: blob_io_unit ...passed 00:06:03.586 Test: blob_io_unit_compatibility ...passed 00:06:03.586 Test: 
blob_ext_md_pages ...passed 00:06:03.586 Test: blob_esnap_io_4096_4096 ...passed 00:06:03.586 Test: blob_esnap_io_512_512 ...passed 00:06:03.586 Test: blob_esnap_io_4096_512 ...passed 00:06:03.586 Test: blob_esnap_io_512_4096 ...passed 00:06:03.586 Suite: blob_bs_nocopy_extent 00:06:03.586 Test: blob_open ...passed 00:06:03.586 Test: blob_create ...[2024-02-13 07:06:37.253207] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:03.586 passed 00:06:03.845 Test: blob_create_loop ...passed 00:06:03.845 Test: blob_create_fail ...[2024-02-13 07:06:37.367197] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:03.845 passed 00:06:03.845 Test: blob_create_internal ...passed 00:06:03.845 Test: blob_create_zero_extent ...passed 00:06:03.845 Test: blob_snapshot ...passed 00:06:03.845 Test: blob_clone ...passed 00:06:04.104 Test: blob_inflate ...[2024-02-13 07:06:37.565445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:04.104 passed 00:06:04.105 Test: blob_delete ...passed 00:06:04.105 Test: blob_resize_test ...[2024-02-13 07:06:37.628845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:04.105 passed 00:06:04.105 Test: channel_ops ...passed 00:06:04.105 Test: blob_super ...passed 00:06:04.105 Test: blob_rw_verify_iov ...passed 00:06:04.105 Test: blob_unmap ...passed 00:06:04.364 Test: blob_iter ...passed 00:06:04.364 Test: blob_parse_md ...passed 00:06:04.364 Test: bs_load_pending_removal ...passed 00:06:04.364 Test: bs_unload ...[2024-02-13 07:06:37.916355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:04.364 passed 00:06:04.364 Test: bs_usable_clusters ...passed 00:06:04.364 Test: blob_crc ...[2024-02-13 07:06:37.980293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:04.364 [2024-02-13 07:06:37.980425] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:04.364 passed 00:06:04.364 Test: blob_flags ...passed 00:06:04.623 Test: bs_version ...passed 00:06:04.623 Test: blob_set_xattrs_test ...[2024-02-13 07:06:38.074829] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:04.623 [2024-02-13 07:06:38.074959] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:04.623 passed 00:06:04.623 Test: blob_thin_prov_alloc ...passed 00:06:04.623 Test: blob_insert_cluster_msg_test ...passed 00:06:04.623 Test: blob_thin_prov_rw ...passed 00:06:04.624 Test: blob_thin_prov_rle ...passed 00:06:04.883 Test: blob_thin_prov_rw_iov ...passed 00:06:04.883 Test: blob_snapshot_rw ...passed 00:06:04.883 Test: blob_snapshot_rw_iov ...passed 00:06:05.141 Test: blob_inflate_rw ...passed 00:06:05.141 Test: blob_snapshot_freeze_io ...passed 00:06:05.141 Test: blob_operation_split_rw ...passed 00:06:05.400 Test: blob_operation_split_rw_iov ...passed 00:06:05.400 Test: 
blob_simultaneous_operations ...[2024-02-13 07:06:38.965997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:05.400 [2024-02-13 07:06:38.966115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.400 [2024-02-13 07:06:38.967290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:05.400 [2024-02-13 07:06:38.967351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.400 [2024-02-13 07:06:38.977419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:05.400 [2024-02-13 07:06:38.977513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.400 [2024-02-13 07:06:38.977641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:05.400 [2024-02-13 07:06:38.977667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:05.400 passed 00:06:05.400 Test: blob_persist_test ...passed 00:06:05.659 Test: blob_decouple_snapshot ...passed 00:06:05.659 Test: blob_seek_io_unit ...passed 00:06:05.659 Test: blob_nested_freezes ...passed 00:06:05.659 Suite: blob_blob_nocopy_extent 00:06:05.659 Test: blob_write ...passed 00:06:05.659 Test: blob_read ...passed 00:06:05.659 Test: blob_rw_verify ...passed 00:06:05.659 Test: blob_rw_verify_iov_nomem ...passed 00:06:05.918 Test: blob_rw_iov_read_only ...passed 00:06:05.918 Test: blob_xattr ...passed 00:06:05.918 Test: blob_dirty_shutdown ...passed 00:06:05.918 Test: blob_is_degraded ...passed 00:06:05.918 Suite: blob_esnap_bs_nocopy_extent 00:06:05.918 Test: blob_esnap_create ...passed 00:06:05.918 Test: blob_esnap_thread_add_remove ...passed 00:06:05.918 Test: blob_esnap_clone_snapshot ...passed 00:06:05.918 Test: blob_esnap_clone_inflate ...passed 00:06:06.180 Test: blob_esnap_clone_decouple ...passed 00:06:06.180 Test: blob_esnap_clone_reload ...passed 00:06:06.180 Test: blob_esnap_hotplug ...passed 00:06:06.180 Suite: blob_copy_noextent 00:06:06.180 Test: blob_init ...[2024-02-13 07:06:39.687344] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:06.180 passed 00:06:06.180 Test: blob_thin_provision ...passed 00:06:06.180 Test: blob_read_only ...passed 00:06:06.180 Test: bs_load ...[2024-02-13 07:06:39.733623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:06.180 passed 00:06:06.180 Test: bs_load_custom_cluster_size ...passed 00:06:06.180 Test: bs_load_after_failed_grow ...passed 00:06:06.180 Test: bs_cluster_sz ...[2024-02-13 07:06:39.760155] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:06.180 [2024-02-13 07:06:39.760351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
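[Editor's note] The bs_cluster_sz failures logged above are intentional: the test hands spdk_bs_init() a cluster size of 0 and 4095 to hit the bs_opts_verify and bs_alloc error paths, since a blobstore cluster must be at least one 4096-byte metadata page. A minimal sketch of the valid-path setup these tests contrast against follows; the callback shape matches the public SPDK blob API, but the exact spdk_bs_opts_init() signature varies between SPDK releases, so treat this as an illustration rather than the unit test's actual code.

    #include "spdk/blob.h"

    /* Completion callback: bserrno is 0 on success, or a negative errno
     * (e.g. -EINVAL) when opts.cluster_sz is smaller than the 4096-byte
     * metadata page size, as in the errors logged above. */
    static void
    bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
    {
            if (bserrno == 0) {
                    /* the blobstore handle 'bs' is now usable */
            }
    }

    static void
    init_blobstore(struct spdk_bs_dev *bs_dev)
    {
            struct spdk_bs_opts opts;

            spdk_bs_opts_init(&opts);   /* newer releases also take sizeof(opts) */
            opts.cluster_sz = 4 * 4096; /* must be >= the page size; 4095 fails */
            spdk_bs_init(bs_dev, &opts, bs_init_done, NULL);
    }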
00:06:06.180 [2024-02-13 07:06:39.760446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:06.180 passed 00:06:06.180 Test: bs_resize_md ...passed 00:06:06.180 Test: bs_destroy ...passed 00:06:06.180 Test: bs_type ...passed 00:06:06.180 Test: bs_super_block ...passed 00:06:06.180 Test: bs_test_recover_cluster_count ...passed 00:06:06.180 Test: bs_grow_live ...passed 00:06:06.180 Test: bs_grow_live_no_space ...passed 00:06:06.180 Test: bs_test_grow ...passed 00:06:06.180 Test: blob_serialize_test ...passed 00:06:06.439 Test: super_block_crc ...passed 00:06:06.439 Test: blob_thin_prov_write_count_io ...passed 00:06:06.439 Test: bs_load_iter_test ...passed 00:06:06.439 Test: blob_relations ...[2024-02-13 07:06:39.912904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.439 [2024-02-13 07:06:39.913043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.439 [2024-02-13 07:06:39.913702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.439 [2024-02-13 07:06:39.913759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.439 passed 00:06:06.439 Test: blob_relations2 ...[2024-02-13 07:06:39.926823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.439 [2024-02-13 07:06:39.926904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.439 [2024-02-13 07:06:39.926944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.439 [2024-02-13 07:06:39.926958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.439 [2024-02-13 07:06:39.927836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.439 [2024-02-13 07:06:39.927900] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.439 [2024-02-13 07:06:39.928203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:06.439 [2024-02-13 07:06:39.928235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.439 passed 00:06:06.439 Test: blob_relations3 ...passed 00:06:06.439 Test: blobstore_clean_power_failure ...passed 00:06:06.439 Test: blob_delete_snapshot_power_failure ...[2024-02-13 07:06:40.075648] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:06.439 [2024-02-13 07:06:40.087910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:06.439 [2024-02-13 07:06:40.088015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:06.439 [2024-02-13 07:06:40.088057] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.439 [2024-02-13 07:06:40.100044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:06.439 [2024-02-13 07:06:40.100131] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:06.440 [2024-02-13 07:06:40.100177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:06.440 [2024-02-13 07:06:40.100198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.440 [2024-02-13 07:06:40.113462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:06.440 [2024-02-13 07:06:40.113601] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.440 [2024-02-13 07:06:40.126217] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:06.440 [2024-02-13 07:06:40.126349] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.699 [2024-02-13 07:06:40.139056] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:06.699 [2024-02-13 07:06:40.139179] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:06.699 passed 00:06:06.699 Test: blob_create_snapshot_power_failure ...[2024-02-13 07:06:40.173472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:06.699 [2024-02-13 07:06:40.198430] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:06:06.699 [2024-02-13 07:06:40.211732] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:06.699 passed 00:06:06.699 Test: blob_io_unit ...passed 00:06:06.699 Test: blob_io_unit_compatibility ...passed 00:06:06.699 Test: blob_ext_md_pages ...passed 00:06:06.699 Test: blob_esnap_io_4096_4096 ...passed 00:06:06.699 Test: blob_esnap_io_512_512 ...passed 00:06:06.699 Test: blob_esnap_io_4096_512 ...passed 00:06:06.958 Test: blob_esnap_io_512_4096 ...passed 00:06:06.958 Suite: blob_bs_copy_noextent 00:06:06.958 Test: blob_open ...passed 00:06:06.958 Test: blob_create ...[2024-02-13 07:06:40.454469] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:06.958 passed 00:06:06.958 Test: blob_create_loop ...passed 00:06:06.958 Test: blob_create_fail ...[2024-02-13 07:06:40.543602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:06.958 passed 00:06:06.958 Test: blob_create_internal ...passed 00:06:06.958 Test: blob_create_zero_extent ...passed 00:06:07.217 Test: blob_snapshot ...passed 00:06:07.217 Test: blob_clone ...passed 00:06:07.217 Test: blob_inflate ...[2024-02-13 07:06:40.722275] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:07.217 passed 00:06:07.217 Test: blob_delete ...passed 00:06:07.217 Test: blob_resize_test ...[2024-02-13 07:06:40.787562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:07.217 passed 00:06:07.217 Test: channel_ops ...passed 00:06:07.217 Test: blob_super ...passed 00:06:07.217 Test: blob_rw_verify_iov ...passed 00:06:07.475 Test: blob_unmap ...passed 00:06:07.475 Test: blob_iter ...passed 00:06:07.475 Test: blob_parse_md ...passed 00:06:07.475 Test: bs_load_pending_removal ...passed 00:06:07.475 Test: bs_unload ...[2024-02-13 07:06:41.046031] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:07.475 passed 00:06:07.475 Test: bs_usable_clusters ...passed 00:06:07.475 Test: blob_crc ...[2024-02-13 07:06:41.110785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:07.475 [2024-02-13 07:06:41.111163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:07.475 passed 00:06:07.475 Test: blob_flags ...passed 00:06:07.734 Test: bs_version ...passed 00:06:07.734 Test: blob_set_xattrs_test ...[2024-02-13 07:06:41.207921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:07.734 [2024-02-13 07:06:41.208282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:07.734 passed 00:06:07.734 Test: blob_thin_prov_alloc ...passed 00:06:07.734 Test: blob_insert_cluster_msg_test ...passed 00:06:07.993 Test: blob_thin_prov_rw ...passed 00:06:07.993 Test: blob_thin_prov_rle ...passed 00:06:07.993 Test: blob_thin_prov_rw_iov ...passed 00:06:07.993 Test: blob_snapshot_rw ...passed 00:06:07.993 Test: blob_snapshot_rw_iov ...passed 00:06:08.251 Test: blob_inflate_rw ...passed 00:06:08.251 Test: blob_snapshot_freeze_io ...passed 00:06:08.510 Test: blob_operation_split_rw ...passed 00:06:08.510 Test: blob_operation_split_rw_iov ...passed 00:06:08.510 Test: blob_simultaneous_operations ...[2024-02-13 07:06:42.111776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.510 [2024-02-13 07:06:42.112157] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.510 [2024-02-13 07:06:42.112663] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.510 [2024-02-13 07:06:42.112823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.510 [2024-02-13 07:06:42.115699] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:08.510 [2024-02-13 07:06:42.115870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.510 [2024-02-13 07:06:42.116005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot 
remove snapshot because it is open 00:06:08.510 [2024-02-13 07:06:42.116278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:08.510 passed 00:06:08.510 Test: blob_persist_test ...passed 00:06:08.810 Test: blob_decouple_snapshot ...passed 00:06:08.810 Test: blob_seek_io_unit ...passed 00:06:08.810 Test: blob_nested_freezes ...passed 00:06:08.810 Suite: blob_blob_copy_noextent 00:06:08.810 Test: blob_write ...passed 00:06:08.810 Test: blob_read ...passed 00:06:08.810 Test: blob_rw_verify ...passed 00:06:08.810 Test: blob_rw_verify_iov_nomem ...passed 00:06:08.810 Test: blob_rw_iov_read_only ...passed 00:06:08.810 Test: blob_xattr ...passed 00:06:09.069 Test: blob_dirty_shutdown ...passed 00:06:09.069 Test: blob_is_degraded ...passed 00:06:09.069 Suite: blob_esnap_bs_copy_noextent 00:06:09.069 Test: blob_esnap_create ...passed 00:06:09.069 Test: blob_esnap_thread_add_remove ...passed 00:06:09.069 Test: blob_esnap_clone_snapshot ...passed 00:06:09.069 Test: blob_esnap_clone_inflate ...passed 00:06:09.069 Test: blob_esnap_clone_decouple ...passed 00:06:09.069 Test: blob_esnap_clone_reload ...passed 00:06:09.329 Test: blob_esnap_hotplug ...passed 00:06:09.329 Suite: blob_copy_extent 00:06:09.329 Test: blob_init ...[2024-02-13 07:06:42.784282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5267:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:06:09.329 passed 00:06:09.329 Test: blob_thin_provision ...passed 00:06:09.329 Test: blob_read_only ...passed 00:06:09.329 Test: bs_load ...[2024-02-13 07:06:42.831967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 896:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:06:09.329 passed 00:06:09.329 Test: bs_load_custom_cluster_size ...passed 00:06:09.329 Test: bs_load_after_failed_grow ...passed 00:06:09.329 Test: bs_cluster_sz ...[2024-02-13 07:06:42.857653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3603:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:06:09.329 [2024-02-13 07:06:42.857886] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5398:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
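[Editor's note] The blob_simultaneous_operations and blob_relations errors above document the deletion rule under test: spdk_bs_delete_blob() refuses a snapshot that is still open or that still has more than one clone, which is why each attempt ends with bs_delete_blob_finish() logging "Failed to remove blob". A hedged sketch of the ordering a caller must follow is below; the blob ID variable, callback wiring, and the -EBUSY value are illustrative assumptions, not taken from the test source.

    #include "spdk/blob.h"

    static spdk_blob_id g_snapshot_id; /* hypothetical: saved at snapshot creation */

    static void
    delete_done(void *cb_arg, int bserrno)
    {
            /* An open or multiply-cloned snapshot typically fails here with
             * a negative errno (e.g. -EBUSY) instead of being removed. */
    }

    static void
    close_done(void *cb_arg, int bserrno)
    {
            struct spdk_blob_store *bs = cb_arg;

            /* Only once no handle to the snapshot remains open, and at most
             * one clone is left, can the delete itself go through. */
            spdk_bs_delete_blob(bs, g_snapshot_id, delete_done, NULL);
    }

    static void
    delete_snapshot(struct spdk_blob_store *bs, struct spdk_blob *snapshot)
    {
            /* Close our own handle first; deleting an open snapshot is refused. */
            spdk_blob_close(snapshot, close_done, bs);
    }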
00:06:09.329 [2024-02-13 07:06:42.858029] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3662:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:06:09.329 passed 00:06:09.329 Test: bs_resize_md ...passed 00:06:09.329 Test: bs_destroy ...passed 00:06:09.329 Test: bs_type ...passed 00:06:09.329 Test: bs_super_block ...passed 00:06:09.329 Test: bs_test_recover_cluster_count ...passed 00:06:09.329 Test: bs_grow_live ...passed 00:06:09.329 Test: bs_grow_live_no_space ...passed 00:06:09.329 Test: bs_test_grow ...passed 00:06:09.329 Test: blob_serialize_test ...passed 00:06:09.329 Test: super_block_crc ...passed 00:06:09.329 Test: blob_thin_prov_write_count_io ...passed 00:06:09.329 Test: bs_load_iter_test ...passed 00:06:09.329 Test: blob_relations ...[2024-02-13 07:06:43.009061] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.329 [2024-02-13 07:06:43.009574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.329 [2024-02-13 07:06:43.011962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.329 [2024-02-13 07:06:43.012299] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.588 passed 00:06:09.588 Test: blob_relations2 ...[2024-02-13 07:06:43.037637] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.588 [2024-02-13 07:06:43.037946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.588 [2024-02-13 07:06:43.038049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.588 [2024-02-13 07:06:43.038315] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.588 [2024-02-13 07:06:43.040090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.588 [2024-02-13 07:06:43.040288] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.588 [2024-02-13 07:06:43.040885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7507:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:06:09.588 [2024-02-13 07:06:43.041082] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.588 passed 00:06:09.588 Test: blob_relations3 ...passed 00:06:09.588 Test: blobstore_clean_power_failure ...passed 00:06:09.588 Test: blob_delete_snapshot_power_failure ...[2024-02-13 07:06:43.265339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:09.847 [2024-02-13 07:06:43.287140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:09.847 [2024-02-13 07:06:43.309150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:09.847 [2024-02-13 07:06:43.309553] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:09.847 [2024-02-13 07:06:43.309623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.847 [2024-02-13 07:06:43.336296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:09.847 [2024-02-13 07:06:43.336735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:09.847 [2024-02-13 07:06:43.336800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:09.847 [2024-02-13 07:06:43.336923] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.847 [2024-02-13 07:06:43.359673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:09.847 [2024-02-13 07:06:43.360063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1397:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:06:09.847 [2024-02-13 07:06:43.360134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7421:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:06:09.847 [2024-02-13 07:06:43.360241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.847 [2024-02-13 07:06:43.380227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:06:09.847 [2024-02-13 07:06:43.380598] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.847 [2024-02-13 07:06:43.399576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7223:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:06:09.847 [2024-02-13 07:06:43.399969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.847 [2024-02-13 07:06:43.420713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7167:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:06:09.847 [2024-02-13 07:06:43.421140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:09.847 passed 00:06:09.847 Test: blob_create_snapshot_power_failure ...[2024-02-13 07:06:43.479890] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:06:09.847 [2024-02-13 07:06:43.500656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1510:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:06:10.106 [2024-02-13 07:06:43.541208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1600:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:06:10.106 [2024-02-13 07:06:43.560088] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6215:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:06:10.106 passed 00:06:10.106 Test: blob_io_unit ...passed 00:06:10.106 Test: blob_io_unit_compatibility ...passed 00:06:10.106 Test: blob_ext_md_pages ...passed 00:06:10.106 Test: blob_esnap_io_4096_4096 ...passed 00:06:10.106 Test: blob_esnap_io_512_512 ...passed 00:06:10.365 Test: blob_esnap_io_4096_512 ...passed 00:06:10.365 Test: 
blob_esnap_io_512_4096 ...passed 00:06:10.365 Suite: blob_bs_copy_extent 00:06:10.365 Test: blob_open ...passed 00:06:10.365 Test: blob_create ...[2024-02-13 07:06:43.965939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:06:10.365 passed 00:06:10.624 Test: blob_create_loop ...passed 00:06:10.624 Test: blob_create_fail ...[2024-02-13 07:06:44.098421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:10.624 passed 00:06:10.624 Test: blob_create_internal ...passed 00:06:10.624 Test: blob_create_zero_extent ...passed 00:06:10.624 Test: blob_snapshot ...passed 00:06:10.883 Test: blob_clone ...passed 00:06:10.883 Test: blob_inflate ...[2024-02-13 07:06:44.363310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6873:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:06:10.883 passed 00:06:10.883 Test: blob_delete ...passed 00:06:10.883 Test: blob_resize_test ...[2024-02-13 07:06:44.465514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6972:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:06:10.883 passed 00:06:10.883 Test: channel_ops ...passed 00:06:10.883 Test: blob_super ...passed 00:06:11.142 Test: blob_rw_verify_iov ...passed 00:06:11.142 Test: blob_unmap ...passed 00:06:11.142 Test: blob_iter ...passed 00:06:11.142 Test: blob_parse_md ...passed 00:06:11.142 Test: bs_load_pending_removal ...passed 00:06:11.142 Test: bs_unload ...[2024-02-13 07:06:44.749672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5655:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:06:11.142 passed 00:06:11.142 Test: bs_usable_clusters ...passed 00:06:11.142 Test: blob_crc ...[2024-02-13 07:06:44.820359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:11.142 [2024-02-13 07:06:44.820698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1609:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:06:11.401 passed 00:06:11.401 Test: blob_flags ...passed 00:06:11.401 Test: bs_version ...passed 00:06:11.401 Test: blob_set_xattrs_test ...[2024-02-13 07:06:44.922754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:11.401 [2024-02-13 07:06:44.923125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6096:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:06:11.401 passed 00:06:11.401 Test: blob_thin_prov_alloc ...passed 00:06:11.401 Test: blob_insert_cluster_msg_test ...passed 00:06:11.659 Test: blob_thin_prov_rw ...passed 00:06:11.659 Test: blob_thin_prov_rle ...passed 00:06:11.659 Test: blob_thin_prov_rw_iov ...passed 00:06:11.659 Test: blob_snapshot_rw ...passed 00:06:11.659 Test: blob_snapshot_rw_iov ...passed 00:06:11.918 Test: blob_inflate_rw ...passed 00:06:11.918 Test: blob_snapshot_freeze_io ...passed 00:06:12.177 Test: blob_operation_split_rw ...passed 00:06:12.177 Test: blob_operation_split_rw_iov ...passed 00:06:12.177 Test: blob_simultaneous_operations ...[2024-02-13 07:06:45.800613] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:12.177 [2024-02-13 
07:06:45.800958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:12.177 [2024-02-13 07:06:45.801501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:12.177 [2024-02-13 07:06:45.801661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:12.177 [2024-02-13 07:06:45.804093] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:12.177 [2024-02-13 07:06:45.804280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:12.177 [2024-02-13 07:06:45.804409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7534:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:06:12.177 [2024-02-13 07:06:45.804660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7474:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:06:12.177 passed 00:06:12.177 Test: blob_persist_test ...passed 00:06:12.435 Test: blob_decouple_snapshot ...passed 00:06:12.435 Test: blob_seek_io_unit ...passed 00:06:12.435 Test: blob_nested_freezes ...passed 00:06:12.435 Suite: blob_blob_copy_extent 00:06:12.435 Test: blob_write ...passed 00:06:12.435 Test: blob_read ...passed 00:06:12.435 Test: blob_rw_verify ...passed 00:06:12.435 Test: blob_rw_verify_iov_nomem ...passed 00:06:12.694 Test: blob_rw_iov_read_only ...passed 00:06:12.694 Test: blob_xattr ...passed 00:06:12.694 Test: blob_dirty_shutdown ...passed 00:06:12.694 Test: blob_is_degraded ...passed 00:06:12.694 Suite: blob_esnap_bs_copy_extent 00:06:12.694 Test: blob_esnap_create ...passed 00:06:12.694 Test: blob_esnap_thread_add_remove ...passed 00:06:12.694 Test: blob_esnap_clone_snapshot ...passed 00:06:12.694 Test: blob_esnap_clone_inflate ...passed 00:06:12.952 Test: blob_esnap_clone_decouple ...passed 00:06:12.952 Test: blob_esnap_clone_reload ...passed 00:06:12.952 Test: blob_esnap_hotplug ...passed 00:06:12.952 00:06:12.952 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.952 suites 16 16 n/a 0 0 00:06:12.952 tests 348 348 348 0 0 00:06:12.952 asserts 92605 92605 92605 0 n/a 00:06:12.952 00:06:12.952 Elapsed time = 13.485 seconds 00:06:12.952 07:06:46 -- unit/unittest.sh@41 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:06:12.952 00:06:12.952 00:06:12.952 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.952 http://cunit.sourceforge.net/ 00:06:12.952 00:06:12.952 00:06:12.952 Suite: blob_bdev 00:06:12.952 Test: create_bs_dev ...passed 00:06:12.952 Test: create_bs_dev_ro ...[2024-02-13 07:06:46.578361] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 507:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:06:12.952 passed 00:06:12.952 Test: create_bs_dev_rw ...passed 00:06:12.952 Test: claim_bs_dev ...[2024-02-13 07:06:46.579370] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:06:12.952 passed 00:06:12.952 Test: claim_bs_dev_ro ...passed 00:06:12.952 Test: deferred_destroy_refs ...passed 00:06:12.952 Test: deferred_destroy_channels ...passed 00:06:12.952 Test: deferred_destroy_threads ...passed 00:06:12.952 00:06:12.952 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.952 suites 1 1 n/a 0 0 00:06:12.952 tests 8 8 8 0 0 00:06:12.952 
asserts 119 119 119 0 n/a 00:06:12.952 00:06:12.952 Elapsed time = 0.001 seconds 00:06:12.952 07:06:46 -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:06:12.952 00:06:12.952 00:06:12.952 CUnit - A unit testing framework for C - Version 2.1-3 00:06:12.952 http://cunit.sourceforge.net/ 00:06:12.952 00:06:12.952 00:06:12.952 Suite: tree 00:06:12.952 Test: blobfs_tree_op_test ...passed 00:06:12.952 00:06:12.952 Run Summary: Type Total Ran Passed Failed Inactive 00:06:12.952 suites 1 1 n/a 0 0 00:06:12.953 tests 1 1 1 0 0 00:06:12.953 asserts 27 27 27 0 n/a 00:06:12.953 00:06:12.953 Elapsed time = 0.000 seconds 00:06:12.953 07:06:46 -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:06:13.212 00:06:13.212 00:06:13.212 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.212 http://cunit.sourceforge.net/ 00:06:13.212 00:06:13.212 00:06:13.212 Suite: blobfs_async_ut 00:06:13.212 Test: fs_init ...passed 00:06:13.212 Test: fs_open ...passed 00:06:13.212 Test: fs_create ...passed 00:06:13.212 Test: fs_truncate ...passed 00:06:13.212 Test: fs_rename ...[2024-02-13 07:06:46.781657] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:06:13.212 passed 00:06:13.212 Test: fs_rw_async ...passed 00:06:13.212 Test: fs_writev_readv_async ...passed 00:06:13.212 Test: tree_find_buffer_ut ...passed 00:06:13.212 Test: channel_ops ...passed 00:06:13.212 Test: channel_ops_sync ...passed 00:06:13.212 00:06:13.212 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.212 suites 1 1 n/a 0 0 00:06:13.212 tests 10 10 10 0 0 00:06:13.212 asserts 292 292 292 0 n/a 00:06:13.212 00:06:13.212 Elapsed time = 0.192 seconds 00:06:13.212 07:06:46 -- unit/unittest.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:06:13.471 00:06:13.471 00:06:13.471 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.471 http://cunit.sourceforge.net/ 00:06:13.471 00:06:13.471 00:06:13.471 Suite: blobfs_sync_ut 00:06:13.471 Test: cache_read_after_write ...[2024-02-13 07:06:46.981640] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:06:13.471 passed 00:06:13.471 Test: file_length ...passed 00:06:13.471 Test: append_write_to_extend_blob ...passed 00:06:13.471 Test: partial_buffer ...passed 00:06:13.471 Test: cache_write_null_buffer ...passed 00:06:13.471 Test: fs_create_sync ...passed 00:06:13.471 Test: fs_rename_sync ...passed 00:06:13.471 Test: cache_append_no_cache ...passed 00:06:13.471 Test: fs_delete_file_without_close ...passed 00:06:13.471 00:06:13.471 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.471 suites 1 1 n/a 0 0 00:06:13.471 tests 9 9 9 0 0 00:06:13.471 asserts 345 345 345 0 n/a 00:06:13.471 00:06:13.471 Elapsed time = 0.385 seconds 00:06:13.730 07:06:47 -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:06:13.730 00:06:13.730 00:06:13.730 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.730 http://cunit.sourceforge.net/ 00:06:13.730 00:06:13.730 00:06:13.730 Suite: blobfs_bdev_ut 00:06:13.730 Test: spdk_blobfs_bdev_detect_test ...[2024-02-13 07:06:47.182080] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 
00:06:13.730 passed 00:06:13.730 Test: spdk_blobfs_bdev_create_test ...[2024-02-13 07:06:47.182973] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:06:13.730 passed 00:06:13.730 Test: spdk_blobfs_bdev_mount_test ...passed 00:06:13.730 00:06:13.730 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.730 suites 1 1 n/a 0 0 00:06:13.730 tests 3 3 3 0 0 00:06:13.730 asserts 9 9 9 0 n/a 00:06:13.730 00:06:13.730 Elapsed time = 0.001 seconds 00:06:13.730 ************************************ 00:06:13.730 END TEST unittest_blob_blobfs 00:06:13.730 ************************************ 00:06:13.730 00:06:13.730 real 0m14.280s 00:06:13.730 user 0m13.678s 00:06:13.730 sys 0m0.759s 00:06:13.730 07:06:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.730 07:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:13.730 07:06:47 -- unit/unittest.sh@232 -- # run_test unittest_event unittest_event 00:06:13.730 07:06:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:13.730 07:06:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:13.730 07:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:13.730 ************************************ 00:06:13.730 START TEST unittest_event 00:06:13.730 ************************************ 00:06:13.730 07:06:47 -- common/autotest_common.sh@1102 -- # unittest_event 00:06:13.730 07:06:47 -- unit/unittest.sh@50 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:06:13.730 00:06:13.730 00:06:13.730 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.730 http://cunit.sourceforge.net/ 00:06:13.730 00:06:13.730 00:06:13.730 Suite: app_suite 00:06:13.730 Test: test_spdk_app_parse_args ...app_ut [options] 00:06:13.730 options: 00:06:13.730 -c, --config JSON config file (default none) 00:06:13.730 --json JSON config file (default none) 00:06:13.730 --json-ignore-init-errors 00:06:13.730 don't exit on invalid config entry 00:06:13.730 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:13.730 -g, --single-file-segments 00:06:13.730 force creating just one hugetlbfs file 00:06:13.730 -h, --help show this usage 00:06:13.730 -i, --shm-id shared memory ID (optional) 00:06:13.730 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:13.730 --lcores lcore to CPU mapping list. The list is in the format: 00:06:13.730 [<,lcores[@CPUs]>...] 00:06:13.730 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:13.730 Within the group, '-' is used for range separator, 00:06:13.730 ',' is used for single number separator. 00:06:13.730 '( )' can be omitted for single element group, 00:06:13.730 '@' can be omitted if cpus and lcores have the same value 00:06:13.730 -n, --mem-channels channel number of memory channels used for DPDK 00:06:13.730 -p, --main-core main (primary) core for DPDK 00:06:13.730 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:13.730 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:13.730 --disable-cpumask-locks Disable CPU core lock files. 
00:06:13.730 --silence-noticelog disable notice level logging to stderr 00:06:13.730 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:13.730 -u, --no-pci disable PCI access 00:06:13.730 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:13.730 --max-delay maximum reactor delay (in microseconds) 00:06:13.730 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:13.730 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:13.730 -R, --huge-unlink unlink huge files after initialization 00:06:13.730 -v, --version print SPDK version 00:06:13.730 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:13.730 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:13.730 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:13.730 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:13.730 Tracepoints vary in size and can use more than one trace entry. 00:06:13.730 --rpcs-allowed comma-separated list of permitted RPCS 00:06:13.730 --env-context Opaque context for use of the env implementation 00:06:13.730 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:13.730 --no-huge run without using hugepages 00:06:13.730 -L, --logflag enable log flag (all, json_util, rpc, thread, trace) 00:06:13.730 -e, --tpoint-group [:] 00:06:13.730 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:13.730 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:13.730 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:13.730 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:13.730 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:13.730 app_ut [options] 00:06:13.730 options: 00:06:13.730 -c, --config JSON config file (default none) 00:06:13.730 --json JSON config file (default none) 00:06:13.730 --json-ignore-init-errors 00:06:13.730 don't exit on invalid config entry 00:06:13.730 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:13.730 -g, --single-file-segments 00:06:13.730 force creating just one hugetlbfs file 00:06:13.730 -h, --help show this usage 00:06:13.730 -i, --shm-id shared memory ID (optional) 00:06:13.730 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:13.730 --lcores lcore to CPU mapping list. The list is in the format: 00:06:13.730 [<,lcores[@CPUs]>...] 00:06:13.730 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:13.730 Within the group, '-' is used for range separator, 00:06:13.730 ',' is used for single number separator. 00:06:13.730 '( )' can be omitted for single element group, 00:06:13.730 '@' can be omitted if cpus and lcores have the same value 00:06:13.730 -n, --mem-channels channel number of memory channels used for DPDK 00:06:13.730 -p, --main-core main (primary) core for DPDK 00:06:13.730 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:13.730 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:13.730 --disable-cpumask-locks Disable CPU core lock files. 
00:06:13.730 --silence-noticelog disable notice level logging to stderr 00:06:13.730 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:13.730 -u, --no-pci disable PCI access 00:06:13.730 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:13.730 --max-delay maximum reactor delay (in microseconds) 00:06:13.730 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:13.730 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:13.730 -R, --huge-unlink unlink huge files after initialization 00:06:13.730 -v, --version print SPDK version 00:06:13.730 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:13.730 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:13.730 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:13.730 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:13.730 Tracepoints vary in size and can use more than one trace entry. 00:06:13.730 --rpcs-allowed comma-separated list of permitted RPCS 00:06:13.730 --env-context Opaque context for use of the env implementation 00:06:13.730 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:13.730 --no-huge run without using hugepages 00:06:13.730 -L, --logflag enable log flag (all, json_util, rpc, thread, trace) 00:06:13.730 -e, --tpoint-group [:] 00:06:13.730 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:13.730 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:13.730 Groups and masks can be combined (e.g. thread,bdev:0x1). 00:06:13.730 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:13.730 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:13.730 app_ut: invalid option -- 'z' 00:06:13.730 app_ut: unrecognized option '--test-long-opt' 00:06:13.730 [2024-02-13 07:06:47.264895] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1028:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:06:13.730 app_ut [options] 00:06:13.730 options: 00:06:13.730 -c, --config JSON config file (default none) 00:06:13.730 --json JSON config file (default none) 00:06:13.730 --json-ignore-init-errors 00:06:13.730 don't exit on invalid config entry 00:06:13.731 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:13.731 -g, --single-file-segments 00:06:13.731 force creating just one hugetlbfs file 00:06:13.731 -h, --help show this usage 00:06:13.731 -i, --shm-id shared memory ID (optional) 00:06:13.731 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:06:13.731 --lcores lcore to CPU mapping list. The list is in the format: 00:06:13.731 [<,lcores[@CPUs]>...] 00:06:13.731 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:13.731 Within the group, '-' is used for range separator, 00:06:13.731 ',' is used for single number separator. 
00:06:13.731 '( )' can be omitted for single element group, 00:06:13.731 '@' can be omitted if cpus and lcores have the same value 00:06:13.731 -n, --mem-channels channel number of memory channels used for DPDK 00:06:13.731 -p, --main-core main (primary) core for DPDK 00:06:13.731 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:13.731 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:13.731 --disable-cpumask-locks Disable CPU core lock files. 00:06:13.731 --silence-noticelog disable notice level logging to stderr 00:06:13.731 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:13.731 -u, --no-pci disable PCI access 00:06:13.731 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:13.731 --max-delay maximum reactor delay (in microseconds) 00:06:13.731 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:13.731 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:13.731 -R, --huge-unlink unlink huge files after initialization 00:06:13.731 -v, --version print SPDK version 00:06:13.731 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:13.731 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:13.731 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:13.731 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:06:13.731 Tracepoints vary in size and can use more than one trace entry. 00:06:13.731 --rpcs-allowed comma-separated list of permitted RPCS 00:06:13.731 --env-context Opaque context for use of the env implementation 00:06:13.731 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:13.731 --no-huge run without using hugepages 00:06:13.731 -L, --logflag enable log flag (all, json_util, rpc, thread, trace) 00:06:13.731 -e, --tpoint-group [:] 00:06:13.731 group_name - tracepoint group name for spdk trace buffers (thread, all) 00:06:13.731 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:06:13.731 Groups and masks can be combined (e.g. thread,bdev:0x1). 
00:06:13.731 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:06:13.731 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:06:13.731 passed 00:06:13.731 00:06:13.731 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.731 suites 1 1 n/a 0 0 00:06:13.731 tests 1 1 1 0 0 00:06:13.731 asserts 8 8 8 0 n/a 00:06:13.731 00:06:13.731 Elapsed time = 0.001 seconds 00:06:13.731 [2024-02-13 07:06:47.265329] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1209:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:06:13.731 [2024-02-13 07:06:47.265576] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1114:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:06:13.731 07:06:47 -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:06:13.731 00:06:13.731 00:06:13.731 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.731 http://cunit.sourceforge.net/ 00:06:13.731 00:06:13.731 00:06:13.731 Suite: app_suite 00:06:13.731 Test: test_create_reactor ...passed 00:06:13.731 Test: test_init_reactors ...passed 00:06:13.731 Test: test_event_call ...passed 00:06:13.731 Test: test_schedule_thread ...passed 00:06:13.731 Test: test_reschedule_thread ...passed 00:06:13.731 Test: test_bind_thread ...passed 00:06:13.731 Test: test_for_each_reactor ...passed 00:06:13.731 Test: test_reactor_stats ...passed 00:06:13.731 Test: test_scheduler ...passed 00:06:13.731 Test: test_governor ...passed 00:06:13.731 00:06:13.731 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.731 suites 1 1 n/a 0 0 00:06:13.731 tests 10 10 10 0 0 00:06:13.731 asserts 344 344 344 0 n/a 00:06:13.731 00:06:13.731 Elapsed time = 0.023 seconds 00:06:13.731 00:06:13.731 real 0m0.097s 00:06:13.731 user 0m0.052s 00:06:13.731 sys 0m0.045s 00:06:13.731 ************************************ 00:06:13.731 END TEST unittest_event 00:06:13.731 ************************************ 00:06:13.731 07:06:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:13.731 07:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:13.731 07:06:47 -- unit/unittest.sh@233 -- # uname -s 00:06:13.731 07:06:47 -- unit/unittest.sh@233 -- # '[' Linux = Linux ']' 00:06:13.731 07:06:47 -- unit/unittest.sh@234 -- # run_test unittest_ftl unittest_ftl 00:06:13.731 07:06:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:13.731 07:06:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:13.731 07:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:13.731 ************************************ 00:06:13.731 START TEST unittest_ftl 00:06:13.731 ************************************ 00:06:13.731 07:06:47 -- common/autotest_common.sh@1102 -- # unittest_ftl 00:06:13.731 07:06:47 -- unit/unittest.sh@55 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut 00:06:13.731 00:06:13.731 00:06:13.731 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.731 http://cunit.sourceforge.net/ 00:06:13.731 00:06:13.731 00:06:13.731 Suite: ftl_band_suite 00:06:13.990 Test: test_band_block_offset_from_addr_base ...passed 00:06:13.990 Test: test_band_block_offset_from_addr_offset ...passed 00:06:13.990 Test: test_band_addr_from_block_offset ...passed 00:06:13.990 Test: test_band_set_addr ...passed 00:06:13.990 Test: test_invalidate_addr ...passed 00:06:13.990 Test: test_next_xfer_addr ...passed 00:06:13.990 00:06:13.990 Run Summary: 
Type Total Ran Passed Failed Inactive 00:06:13.990 suites 1 1 n/a 0 0 00:06:13.990 tests 6 6 6 0 0 00:06:13.990 asserts 30356 30356 30356 0 n/a 00:06:13.990 00:06:13.990 Elapsed time = 0.179 seconds 00:06:13.990 07:06:47 -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut 00:06:13.990 00:06:13.990 00:06:13.990 CUnit - A unit testing framework for C - Version 2.1-3 00:06:13.990 http://cunit.sourceforge.net/ 00:06:13.990 00:06:13.990 00:06:13.990 Suite: ftl_bitmap 00:06:13.990 Test: test_ftl_bitmap_create ...passed 00:06:13.990 Test: test_ftl_bitmap_get ...[2024-02-13 07:06:47.670947] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes 00:06:13.990 [2024-02-13 07:06:47.671245] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c: 58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes 00:06:13.990 passed 00:06:13.990 Test: test_ftl_bitmap_set ...passed 00:06:13.990 Test: test_ftl_bitmap_clear ...passed 00:06:13.990 Test: test_ftl_bitmap_find_first_set ...passed 00:06:13.990 Test: test_ftl_bitmap_find_first_clear ...passed 00:06:13.990 Test: test_ftl_bitmap_count_set ...passed 00:06:13.990 00:06:13.990 Run Summary: Type Total Ran Passed Failed Inactive 00:06:13.990 suites 1 1 n/a 0 0 00:06:13.990 tests 7 7 7 0 0 00:06:13.990 asserts 137 137 137 0 n/a 00:06:13.990 00:06:13.990 Elapsed time = 0.001 seconds 00:06:14.249 07:06:47 -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut 00:06:14.249 00:06:14.249 00:06:14.249 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.249 http://cunit.sourceforge.net/ 00:06:14.249 00:06:14.249 00:06:14.249 Suite: ftl_io_suite 00:06:14.249 Test: test_completion ...passed 00:06:14.249 Test: test_multiple_ios ...passed 00:06:14.249 00:06:14.249 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.249 suites 1 1 n/a 0 0 00:06:14.249 tests 2 2 2 0 0 00:06:14.249 asserts 47 47 47 0 n/a 00:06:14.249 00:06:14.249 Elapsed time = 0.003 seconds 00:06:14.249 07:06:47 -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut 00:06:14.249 00:06:14.249 00:06:14.250 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.250 http://cunit.sourceforge.net/ 00:06:14.250 00:06:14.250 00:06:14.250 Suite: ftl_mngt 00:06:14.250 Test: test_next_step ...passed 00:06:14.250 Test: test_continue_step ...passed 00:06:14.250 Test: test_get_func_and_step_cntx_alloc ...passed 00:06:14.250 Test: test_fail_step ...passed 00:06:14.250 Test: test_mngt_call_and_call_rollback ...passed 00:06:14.250 Test: test_nested_process_failure ...passed 00:06:14.250 00:06:14.250 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.250 suites 1 1 n/a 0 0 00:06:14.250 tests 6 6 6 0 0 00:06:14.250 asserts 176 176 176 0 n/a 00:06:14.250 00:06:14.250 Elapsed time = 0.001 seconds 00:06:14.250 07:06:47 -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut 00:06:14.250 00:06:14.250 00:06:14.250 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.250 http://cunit.sourceforge.net/ 00:06:14.250 00:06:14.250 00:06:14.250 Suite: ftl_mempool 00:06:14.250 Test: test_ftl_mempool_create ...passed 00:06:14.250 Test: test_ftl_mempool_get_put ...passed 00:06:14.250 00:06:14.250 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.250 suites 1 1 n/a 0 0 00:06:14.250 tests 2 2 2 0 0 
00:06:14.250 asserts 36 36 36 0 n/a 00:06:14.250 00:06:14.250 Elapsed time = 0.000 seconds 00:06:14.250 07:06:47 -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut 00:06:14.250 00:06:14.250 00:06:14.250 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.250 http://cunit.sourceforge.net/ 00:06:14.250 00:06:14.250 00:06:14.250 Suite: ftl_addr64_suite 00:06:14.250 Test: test_addr_cached ...passed 00:06:14.250 00:06:14.250 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.250 suites 1 1 n/a 0 0 00:06:14.250 tests 1 1 1 0 0 00:06:14.250 asserts 1536 1536 1536 0 n/a 00:06:14.250 00:06:14.250 Elapsed time = 0.000 seconds 00:06:14.250 07:06:47 -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut 00:06:14.250 00:06:14.250 00:06:14.250 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.250 http://cunit.sourceforge.net/ 00:06:14.250 00:06:14.250 00:06:14.250 Suite: ftl_sb 00:06:14.250 Test: test_sb_crc_v2 ...passed 00:06:14.250 Test: test_sb_crc_v3 ...passed 00:06:14.250 Test: test_sb_v3_md_layout ...[2024-02-13 07:06:47.822928] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions 00:06:14.250 [2024-02-13 07:06:47.823222] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:14.250 [2024-02-13 07:06:47.823269] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:14.250 [2024-02-13 07:06:47.823300] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow 00:06:14.250 [2024-02-13 07:06:47.823325] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:14.250 [2024-02-13 07:06:47.823402] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found 00:06:14.250 [2024-02-13 07:06:47.823428] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:14.250 [2024-02-13 07:06:47.823475] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found 00:06:14.250 [2024-02-13 07:06:47.823569] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found 00:06:14.250 passed 00:06:14.250 Test: test_sb_v5_md_layout ...[2024-02-13 07:06:47.823614] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:14.250 [2024-02-13 07:06:47.823639] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found 00:06:14.250 passed 00:06:14.250 00:06:14.250 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.250 suites 1 1 n/a 0 0 00:06:14.250 tests 4 4 4 0 0 00:06:14.250 asserts 148 148 148 0 n/a 00:06:14.250 00:06:14.250 Elapsed time = 0.002 seconds 00:06:14.250 07:06:47 -- unit/unittest.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut 00:06:14.250 00:06:14.250 00:06:14.250 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.250 http://cunit.sourceforge.net/ 00:06:14.250 00:06:14.250 00:06:14.250 Suite: ftl_layout_upgrade 00:06:14.250 Test: test_l2p_upgrade ...passed 00:06:14.250 00:06:14.250 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.250 suites 1 1 n/a 0 0 00:06:14.250 tests 1 1 1 0 0 00:06:14.250 asserts 140 140 140 0 n/a 00:06:14.250 00:06:14.250 Elapsed time = 0.001 seconds 00:06:14.250 00:06:14.250 real 0m0.479s 00:06:14.250 user 0m0.240s 00:06:14.250 sys 0m0.242s 00:06:14.250 ************************************ 00:06:14.250 END TEST unittest_ftl 00:06:14.250 ************************************ 00:06:14.250 07:06:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.250 07:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:14.250 07:06:47 -- unit/unittest.sh@237 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:14.250 07:06:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:14.250 07:06:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:14.250 07:06:47 -- common/autotest_common.sh@10 -- # set +x 00:06:14.250 ************************************ 00:06:14.250 START TEST unittest_accel 00:06:14.250 ************************************ 00:06:14.250 07:06:47 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:06:14.510 00:06:14.510 00:06:14.510 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.510 http://cunit.sourceforge.net/ 00:06:14.510 00:06:14.510 00:06:14.510 Suite: accel_sequence 00:06:14.510 Test: test_sequence_fill_copy ...passed 00:06:14.510 Test: test_sequence_abort ...passed 00:06:14.510 Test: test_sequence_append_error ...passed 00:06:14.510 Test: test_sequence_completion_error ...[2024-02-13 07:06:47.957240] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fb7df1ee7c0 00:06:14.510 passed 00:06:14.510 Test: test_sequence_decompress ...[2024-02-13 07:06:47.957644] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1926:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fb7df1ee7c0 00:06:14.510 [2024-02-13 07:06:47.957694] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fb7df1ee7c0 00:06:14.510 [2024-02-13 07:06:47.957736] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1836:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fb7df1ee7c0 00:06:14.510 passed 00:06:14.510 Test: test_sequence_reverse ...passed 00:06:14.510 Test: test_sequence_copy_elision ...passed 00:06:14.510 Test: test_sequence_accel_buffers ...passed 00:06:14.510 Test: test_sequence_memory_domain ...[2024-02-13 07:06:47.970705] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1728:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:06:14.510 passed 00:06:14.510 Test: test_sequence_module_memory_domain ...[2024-02-13 07:06:47.970894] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1767:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98 00:06:14.510 passed 00:06:14.510 Test: test_sequence_crypto ...passed 00:06:14.510 Test: test_sequence_driver ...[2024-02-13 07:06:47.978354] 
/home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1875:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fb7de5c67c0 using driver: ut 00:06:14.510 [2024-02-13 07:06:47.978462] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1939:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fb7de5c67c0 through driver: ut 00:06:14.510 passed 00:06:14.510 Test: test_sequence_same_iovs ...passed 00:06:14.510 Test: test_sequence_crc32 ...passed 00:06:14.510 Suite: accel 00:06:14.510 Test: test_spdk_accel_task_complete ...passed 00:06:14.510 Test: test_get_task ...passed 00:06:14.510 Test: test_spdk_accel_submit_copy ...passed 00:06:14.510 Test: test_spdk_accel_submit_dualcast ...passed 00:06:14.510 Test: test_spdk_accel_submit_compare ...passed 00:06:14.510 Test: test_spdk_accel_submit_fill ...passed 00:06:14.510 Test: test_spdk_accel_submit_crc32c ...passed 00:06:14.510 Test: test_spdk_accel_submit_crc32cv ...passed 00:06:14.510 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:06:14.510 Test: test_spdk_accel_submit_xor ...passed 00:06:14.510 Test: test_spdk_accel_module_find_by_name ...passed 00:06:14.510 Test: test_spdk_accel_module_register ...[2024-02-13 07:06:47.984067] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:14.510 [2024-02-13 07:06:47.984127] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 432:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:06:14.510 passed 00:06:14.510 00:06:14.510 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.510 suites 2 2 n/a 0 0 00:06:14.510 tests 26 26 26 0 0 00:06:14.510 asserts 831 831 831 0 n/a 00:06:14.510 00:06:14.510 Elapsed time = 0.040 seconds 00:06:14.510 00:06:14.510 real 0m0.081s 00:06:14.510 user 0m0.041s 00:06:14.510 sys 0m0.041s 00:06:14.510 ************************************ 00:06:14.510 END TEST unittest_accel 00:06:14.510 ************************************ 00:06:14.510 07:06:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.510 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:14.510 07:06:48 -- unit/unittest.sh@238 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:14.510 07:06:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:14.510 07:06:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:14.510 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:14.510 ************************************ 00:06:14.510 START TEST unittest_ioat 00:06:14.510 ************************************ 00:06:14.510 07:06:48 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:06:14.510 00:06:14.510 00:06:14.510 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.510 http://cunit.sourceforge.net/ 00:06:14.510 00:06:14.510 00:06:14.510 Suite: ioat 00:06:14.510 Test: ioat_state_check ...passed 00:06:14.510 00:06:14.510 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.510 suites 1 1 n/a 0 0 00:06:14.510 tests 1 1 1 0 0 00:06:14.510 asserts 32 32 32 0 n/a 00:06:14.510 00:06:14.510 Elapsed time = 0.000 seconds 00:06:14.510 00:06:14.510 real 0m0.031s 00:06:14.510 user 0m0.017s 00:06:14.510 sys 0m0.014s 00:06:14.510 ************************************ 00:06:14.510 END TEST unittest_ioat 00:06:14.510 ************************************ 00:06:14.510 07:06:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 
00:06:14.510 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:14.510 07:06:48 -- unit/unittest.sh@239 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:14.510 07:06:48 -- unit/unittest.sh@240 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:14.510 07:06:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:14.510 07:06:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:14.510 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:14.510 ************************************ 00:06:14.510 START TEST unittest_idxd_user 00:06:14.510 ************************************ 00:06:14.510 07:06:48 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:06:14.510 00:06:14.510 00:06:14.511 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.511 http://cunit.sourceforge.net/ 00:06:14.511 00:06:14.511 00:06:14.511 Suite: idxd_user 00:06:14.511 Test: test_idxd_wait_cmd ...[2024-02-13 07:06:48.156104] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:14.511 passed 00:06:14.511 Test: test_idxd_reset_dev ...[2024-02-13 07:06:48.156411] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:06:14.511 passed 00:06:14.511 Test: test_idxd_group_config ...passed 00:06:14.511 Test: test_idxd_wq_config ...passed 00:06:14.511 00:06:14.511 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.511 suites 1 1 n/a 0 0 00:06:14.511 tests 4 4 4 0 0 00:06:14.511 asserts 20 20 20 0 n/a 00:06:14.511 00:06:14.511 Elapsed time = 0.001 seconds 00:06:14.511 [2024-02-13 07:06:48.156538] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:06:14.511 [2024-02-13 07:06:48.156586] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:06:14.511 00:06:14.511 real 0m0.032s 00:06:14.511 user 0m0.029s 00:06:14.511 sys 0m0.004s 00:06:14.511 ************************************ 00:06:14.511 END TEST unittest_idxd_user 00:06:14.511 ************************************ 00:06:14.511 07:06:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.511 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:14.770 07:06:48 -- unit/unittest.sh@242 -- # run_test unittest_iscsi unittest_iscsi 00:06:14.770 07:06:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:14.770 07:06:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:14.770 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:14.770 ************************************ 00:06:14.770 START TEST unittest_iscsi 00:06:14.770 ************************************ 00:06:14.770 07:06:48 -- common/autotest_common.sh@1102 -- # unittest_iscsi 00:06:14.770 07:06:48 -- unit/unittest.sh@66 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:06:14.770 00:06:14.770 00:06:14.770 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.770 http://cunit.sourceforge.net/ 00:06:14.770 00:06:14.770 00:06:14.770 Suite: conn_suite 00:06:14.770 Test: read_task_split_in_order_case ...passed 00:06:14.770 Test: read_task_split_reverse_order_case ...passed 00:06:14.770 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:06:14.770 Test: process_non_read_task_completion_test 
...passed 00:06:14.770 Test: free_tasks_on_connection ...passed 00:06:14.770 Test: free_tasks_with_queued_datain ...passed 00:06:14.770 Test: abort_queued_datain_task_test ...passed 00:06:14.770 Test: abort_queued_datain_tasks_test ...passed 00:06:14.770 00:06:14.770 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.770 suites 1 1 n/a 0 0 00:06:14.770 tests 8 8 8 0 0 00:06:14.770 asserts 230 230 230 0 n/a 00:06:14.770 00:06:14.770 Elapsed time = 0.000 seconds 00:06:14.770 07:06:48 -- unit/unittest.sh@67 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:06:14.770 00:06:14.770 00:06:14.770 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.770 http://cunit.sourceforge.net/ 00:06:14.770 00:06:14.770 00:06:14.770 Suite: iscsi_suite 00:06:14.770 Test: param_negotiation_test ...passed 00:06:14.770 Test: list_negotiation_test ...passed 00:06:14.770 Test: parse_valid_test ...passed 00:06:14.770 Test: parse_invalid_test ...[2024-02-13 07:06:48.284164] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:06:14.770 [2024-02-13 07:06:48.284516] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:06:14.770 [2024-02-13 07:06:48.284563] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:06:14.770 [2024-02-13 07:06:48.284624] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:06:14.770 [2024-02-13 07:06:48.284771] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:06:14.770 [2024-02-13 07:06:48.284839] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:06:14.770 passed 00:06:14.770 00:06:14.770 [2024-02-13 07:06:48.284970] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:06:14.770 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.770 suites 1 1 n/a 0 0 00:06:14.770 tests 4 4 4 0 0 00:06:14.770 asserts 161 161 161 0 n/a 00:06:14.770 00:06:14.770 Elapsed time = 0.005 seconds 00:06:14.770 07:06:48 -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:06:14.770 00:06:14.770 00:06:14.770 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.770 http://cunit.sourceforge.net/ 00:06:14.770 00:06:14.770 00:06:14.771 Suite: iscsi_target_node_suite 00:06:14.771 Test: add_lun_test_cases ...[2024-02-13 07:06:48.316847] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1248:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:06:14.771 [2024-02-13 07:06:48.317193] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1254:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:06:14.771 [2024-02-13 07:06:48.317299] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:14.771 [2024-02-13 07:06:48.317337] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1260:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:06:14.771 [2024-02-13 07:06:48.317358] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1266:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:06:14.771 passed 00:06:14.771 Test: allow_any_allowed ...passed 00:06:14.771 Test: allow_ipv6_allowed ...passed 00:06:14.771 Test: allow_ipv6_denied ...passed 00:06:14.771 Test: allow_ipv6_invalid 
...passed 00:06:14.771 Test: allow_ipv4_allowed ...passed 00:06:14.771 Test: allow_ipv4_denied ...passed 00:06:14.771 Test: allow_ipv4_invalid ...passed 00:06:14.771 Test: node_access_allowed ...passed 00:06:14.771 Test: node_access_denied_by_empty_netmask ...passed 00:06:14.771 Test: node_access_multi_initiator_groups_cases ...passed 00:06:14.771 Test: allow_iscsi_name_multi_maps_case ...passed 00:06:14.771 Test: chap_param_test_cases ...[2024-02-13 07:06:48.317771] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:06:14.771 [2024-02-13 07:06:48.317803] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:06:14.771 passed 00:06:14.771 00:06:14.771 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.771 suites 1 1 n/a 0 0 00:06:14.771 tests 13 13 13 0 0 00:06:14.771 asserts 50 50 50 0 n/a 00:06:14.771 00:06:14.771 Elapsed time = 0.001 seconds 00:06:14.771 [2024-02-13 07:06:48.317849] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:06:14.771 [2024-02-13 07:06:48.317871] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1035:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:06:14.771 [2024-02-13 07:06:48.317897] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1026:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:06:14.771 07:06:48 -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:06:14.771 00:06:14.771 00:06:14.771 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.771 http://cunit.sourceforge.net/ 00:06:14.771 00:06:14.771 00:06:14.771 Suite: iscsi_suite 00:06:14.771 Test: op_login_check_target_test ...passed 00:06:14.771 Test: op_login_session_normal_test ...[2024-02-13 07:06:48.353759] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1434:iscsi_op_login_check_target: *ERROR*: access denied 00:06:14.771 [2024-02-13 07:06:48.354095] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:14.771 [2024-02-13 07:06:48.354136] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:14.771 [2024-02-13 07:06:48.354165] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1626:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:06:14.771 [2024-02-13 07:06:48.354230] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:06:14.771 passed 00:06:14.771 Test: maxburstlength_test ...[2024-02-13 07:06:48.354328] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:14.771 [2024-02-13 07:06:48.354424] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:06:14.771 [2024-02-13 07:06:48.354471] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1467:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:06:14.771 passed 00:06:14.771 Test: underflow_for_read_transfer_test ...passed 00:06:14.771 Test: underflow_for_zero_read_transfer_test ...[2024-02-13 07:06:48.354705] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:14.771 [2024-02-13 07:06:48.354752] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4548:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:06:14.771 passed 00:06:14.771 Test: underflow_for_request_sense_test ...passed 00:06:14.771 Test: underflow_for_check_condition_test ...passed 00:06:14.771 Test: add_transfer_task_test ...passed 00:06:14.771 Test: get_transfer_task_test ...passed 00:06:14.771 Test: del_transfer_task_test ...passed 00:06:14.771 Test: clear_all_transfer_tasks_test ...passed 00:06:14.771 Test: build_iovs_test ...passed 00:06:14.771 Test: build_iovs_with_md_test ...passed 00:06:14.771 Test: pdu_hdr_op_login_test ...[2024-02-13 07:06:48.356149] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1251:iscsi_op_login_rsp_init: *ERROR*: transit error 00:06:14.771 [2024-02-13 07:06:48.356246] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1258:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:06:14.771 [2024-02-13 07:06:48.356328] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1272:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:06:14.771 passed 00:06:14.771 Test: pdu_hdr_op_text_test ...passed 00:06:14.771 Test: pdu_hdr_op_logout_test ...[2024-02-13 07:06:48.356425] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2240:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:14.771 [2024-02-13 07:06:48.356497] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2272:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:06:14.771 [2024-02-13 07:06:48.356529] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2285:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:06:14.771 [2024-02-13 07:06:48.356594] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2515:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 
00:06:14.771 passed 00:06:14.771 Test: pdu_hdr_op_scsi_test ...[2024-02-13 07:06:48.356714] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:14.771 [2024-02-13 07:06:48.356739] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3336:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:06:14.771 [2024-02-13 07:06:48.356778] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3364:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:06:14.771 [2024-02-13 07:06:48.356861] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3397:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:06:14.771 [2024-02-13 07:06:48.356954] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3404:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:06:14.771 [2024-02-13 07:06:48.357135] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3428:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:06:14.771 passed 00:06:14.771 Test: pdu_hdr_op_task_mgmt_test ...[2024-02-13 07:06:48.357219] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3605:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:06:14.771 [2024-02-13 07:06:48.357275] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3694:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:06:14.771 passed 00:06:14.771 Test: pdu_hdr_op_nopout_test ...passed 00:06:14.771 Test: pdu_hdr_op_data_test ...[2024-02-13 07:06:48.357507] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3713:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:06:14.771 [2024-02-13 07:06:48.357583] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:14.772 [2024-02-13 07:06:48.357606] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3735:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:06:14.772 [2024-02-13 07:06:48.357631] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3743:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:06:14.772 [2024-02-13 07:06:48.357666] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4186:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:06:14.772 [2024-02-13 07:06:48.357722] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4203:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:06:14.772 [2024-02-13 07:06:48.357780] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4211:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:06:14.772 [2024-02-13 07:06:48.357822] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4216:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:06:14.772 [2024-02-13 07:06:48.357869] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4222:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:06:14.772 [2024-02-13 07:06:48.357937] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4233:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:06:14.772 [2024-02-13 07:06:48.357962] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4243:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:06:14.772 passed 00:06:14.772 Test: empty_text_with_cbit_test ...passed 00:06:14.772 Test: pdu_payload_read_test ...[2024-02-13 
07:06:48.360092] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4631:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:06:14.772 passed 00:06:14.772 Test: data_out_pdu_sequence_test ...passed 00:06:14.772 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:06:14.772 00:06:14.772 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.772 suites 1 1 n/a 0 0 00:06:14.772 tests 24 24 24 0 0 00:06:14.772 asserts 150253 150253 150253 0 n/a 00:06:14.772 00:06:14.772 Elapsed time = 0.016 seconds 00:06:14.772 07:06:48 -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:06:14.772 00:06:14.772 00:06:14.772 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.772 http://cunit.sourceforge.net/ 00:06:14.772 00:06:14.772 00:06:14.772 Suite: init_grp_suite 00:06:14.772 Test: create_initiator_group_success_case ...passed 00:06:14.772 Test: find_initiator_group_success_case ...passed 00:06:14.772 Test: register_initiator_group_twice_case ...passed 00:06:14.772 Test: add_initiator_name_success_case ...passed 00:06:14.772 Test: add_initiator_name_fail_case ...passed 00:06:14.772 Test: delete_all_initiator_names_success_case ...passed 00:06:14.772 Test: add_netmask_success_case ...[2024-02-13 07:06:48.403181] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:06:14.772 passed 00:06:14.772 Test: add_netmask_fail_case ...passed 00:06:14.772 Test: delete_all_netmasks_success_case ...passed 00:06:14.772 Test: initiator_name_overwrite_all_to_any_case ...passed 00:06:14.772 Test: netmask_overwrite_all_to_any_case ...[2024-02-13 07:06:48.403581] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:06:14.772 passed 00:06:14.772 Test: add_delete_initiator_names_case ...passed 00:06:14.772 Test: add_duplicated_initiator_names_case ...passed 00:06:14.772 Test: delete_nonexisting_initiator_names_case ...passed 00:06:14.772 Test: add_delete_netmasks_case ...passed 00:06:14.772 Test: add_duplicated_netmasks_case ...passed 00:06:14.772 Test: delete_nonexisting_netmasks_case ...passed 00:06:14.772 00:06:14.772 Run Summary: Type Total Ran Passed Failed Inactive 00:06:14.772 suites 1 1 n/a 0 0 00:06:14.772 tests 17 17 17 0 0 00:06:14.772 asserts 108 108 108 0 n/a 00:06:14.772 00:06:14.772 Elapsed time = 0.001 seconds 00:06:14.772 07:06:48 -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:06:14.772 00:06:14.772 00:06:14.772 CUnit - A unit testing framework for C - Version 2.1-3 00:06:14.772 http://cunit.sourceforge.net/ 00:06:14.772 00:06:14.772 00:06:14.772 Suite: portal_grp_suite 00:06:14.772 Test: portal_create_ipv4_normal_case ...passed 00:06:14.772 Test: portal_create_ipv6_normal_case ...passed 00:06:14.772 Test: portal_create_ipv4_wildcard_case ...passed 00:06:14.772 Test: portal_create_ipv6_wildcard_case ...passed 00:06:14.772 Test: portal_create_twice_case ...[2024-02-13 07:06:48.438352] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:06:14.772 passed 00:06:14.772 Test: portal_grp_register_unregister_case ...passed 00:06:14.772 Test: portal_grp_register_twice_case ...passed 00:06:14.772 Test: portal_grp_add_delete_case ...passed 00:06:14.772 Test: portal_grp_add_delete_twice_case ...passed 00:06:14.772 00:06:14.772 Run Summary: 
Type Total Ran Passed Failed Inactive 00:06:14.772 suites 1 1 n/a 0 0 00:06:14.772 tests 9 9 9 0 0 00:06:14.772 asserts 44 44 44 0 n/a 00:06:14.772 00:06:14.772 Elapsed time = 0.003 seconds 00:06:15.031 00:06:15.031 real 0m0.234s 00:06:15.031 user 0m0.150s 00:06:15.031 sys 0m0.087s 00:06:15.031 07:06:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.031 ************************************ 00:06:15.031 END TEST unittest_iscsi 00:06:15.031 ************************************ 00:06:15.031 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.031 07:06:48 -- unit/unittest.sh@243 -- # run_test unittest_json unittest_json 00:06:15.031 07:06:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:15.031 07:06:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:15.031 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.031 ************************************ 00:06:15.031 START TEST unittest_json 00:06:15.031 ************************************ 00:06:15.031 07:06:48 -- common/autotest_common.sh@1102 -- # unittest_json 00:06:15.031 07:06:48 -- unit/unittest.sh@75 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:06:15.031 00:06:15.031 00:06:15.031 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.031 http://cunit.sourceforge.net/ 00:06:15.031 00:06:15.031 00:06:15.031 Suite: json 00:06:15.031 Test: test_parse_literal ...passed 00:06:15.031 Test: test_parse_string_simple ...passed 00:06:15.031 Test: test_parse_string_control_chars ...passed 00:06:15.031 Test: test_parse_string_utf8 ...passed 00:06:15.031 Test: test_parse_string_escapes_twochar ...passed 00:06:15.031 Test: test_parse_string_escapes_unicode ...passed 00:06:15.031 Test: test_parse_number ...passed 00:06:15.031 Test: test_parse_array ...passed 00:06:15.031 Test: test_parse_object ...passed 00:06:15.031 Test: test_parse_nesting ...passed 00:06:15.031 Test: test_parse_comment ...passed 00:06:15.031 00:06:15.031 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.031 suites 1 1 n/a 0 0 00:06:15.031 tests 11 11 11 0 0 00:06:15.031 asserts 1516 1516 1516 0 n/a 00:06:15.031 00:06:15.031 Elapsed time = 0.001 seconds 00:06:15.031 07:06:48 -- unit/unittest.sh@76 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:06:15.031 00:06:15.031 00:06:15.031 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.031 http://cunit.sourceforge.net/ 00:06:15.031 00:06:15.031 00:06:15.031 Suite: json 00:06:15.031 Test: test_strequal ...passed 00:06:15.031 Test: test_num_to_uint16 ...passed 00:06:15.031 Test: test_num_to_int32 ...passed 00:06:15.031 Test: test_num_to_uint64 ...passed 00:06:15.032 Test: test_decode_object ...passed 00:06:15.032 Test: test_decode_array ...passed 00:06:15.032 Test: test_decode_bool ...passed 00:06:15.032 Test: test_decode_uint16 ...passed 00:06:15.032 Test: test_decode_int32 ...passed 00:06:15.032 Test: test_decode_uint32 ...passed 00:06:15.032 Test: test_decode_uint64 ...passed 00:06:15.032 Test: test_decode_string ...passed 00:06:15.032 Test: test_decode_uuid ...passed 00:06:15.032 Test: test_find ...passed 00:06:15.032 Test: test_find_array ...passed 00:06:15.032 Test: test_iterating ...passed 00:06:15.032 Test: test_free_object ...passed 00:06:15.032 00:06:15.032 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.032 suites 1 1 n/a 0 0 00:06:15.032 tests 17 17 17 0 0 00:06:15.032 asserts 236 236 236 0 n/a 00:06:15.032 00:06:15.032 Elapsed time = 0.001 seconds 00:06:15.032 
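For context on the output format recurring throughout this run: each *_ut binary invoked above is a standalone CUnit runner, and the repeating Suite / Test / Run Summary blocks are CUnit's verbose reporting. Below is a minimal sketch of that harness pattern; the suite and test names are illustrative placeholders, not SPDK's actual registration code.

    #include <CUnit/Basic.h>

    /* Illustrative test body; the real *_ut tests exercise SPDK internals. */
    static void test_example(void)
    {
        CU_ASSERT(1 + 1 == 2); /* each CU_ASSERT adds one entry to the "asserts" row */
    }

    int main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        /* "example_suite" would appear as the "Suite:" line in the output */
        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE); /* prints the per-test "...passed" lines */
        CU_basic_run_tests();              /* emits the Run Summary table */

        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures > 0 ? 1 : 0;
    }

This is also why the "asserts" column in each Run Summary is usually far larger than the "tests" column (e.g. 6 tests vs 30356 asserts in ftl_band_ut above): it counts individual CU_ASSERT-style checks, not test functions.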
07:06:48 -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:06:15.032 00:06:15.032 00:06:15.032 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.032 http://cunit.sourceforge.net/ 00:06:15.032 00:06:15.032 00:06:15.032 Suite: json 00:06:15.032 Test: test_write_literal ...passed 00:06:15.032 Test: test_write_string_simple ...passed 00:06:15.032 Test: test_write_string_escapes ...passed 00:06:15.032 Test: test_write_string_utf16le ...passed 00:06:15.032 Test: test_write_number_int32 ...passed 00:06:15.032 Test: test_write_number_uint32 ...passed 00:06:15.032 Test: test_write_number_uint128 ...passed 00:06:15.032 Test: test_write_string_number_uint128 ...passed 00:06:15.032 Test: test_write_number_int64 ...passed 00:06:15.032 Test: test_write_number_uint64 ...passed 00:06:15.032 Test: test_write_number_double ...passed 00:06:15.032 Test: test_write_uuid ...passed 00:06:15.032 Test: test_write_array ...passed 00:06:15.032 Test: test_write_object ...passed 00:06:15.032 Test: test_write_nesting ...passed 00:06:15.032 Test: test_write_val ...passed 00:06:15.032 00:06:15.032 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.032 suites 1 1 n/a 0 0 00:06:15.032 tests 16 16 16 0 0 00:06:15.032 asserts 918 918 918 0 n/a 00:06:15.032 00:06:15.032 Elapsed time = 0.005 seconds 00:06:15.032 07:06:48 -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:06:15.032 00:06:15.032 00:06:15.032 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.032 http://cunit.sourceforge.net/ 00:06:15.032 00:06:15.032 00:06:15.032 Suite: jsonrpc 00:06:15.032 Test: test_parse_request ...passed 00:06:15.032 Test: test_parse_request_streaming ...passed 00:06:15.032 00:06:15.032 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.032 suites 1 1 n/a 0 0 00:06:15.032 tests 2 2 2 0 0 00:06:15.032 asserts 289 289 289 0 n/a 00:06:15.032 00:06:15.032 Elapsed time = 0.004 seconds 00:06:15.032 00:06:15.032 real 0m0.138s 00:06:15.032 user 0m0.072s 00:06:15.032 sys 0m0.064s 00:06:15.032 07:06:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.032 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 ************************************ 00:06:15.032 END TEST unittest_json 00:06:15.032 ************************************ 00:06:15.032 07:06:48 -- unit/unittest.sh@244 -- # run_test unittest_rpc unittest_rpc 00:06:15.032 07:06:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:15.032 07:06:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:15.032 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 ************************************ 00:06:15.032 START TEST unittest_rpc 00:06:15.032 ************************************ 00:06:15.032 07:06:48 -- common/autotest_common.sh@1102 -- # unittest_rpc 00:06:15.032 07:06:48 -- unit/unittest.sh@82 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:06:15.291 00:06:15.291 00:06:15.291 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.291 http://cunit.sourceforge.net/ 00:06:15.291 00:06:15.291 00:06:15.291 Suite: rpc 00:06:15.291 Test: test_jsonrpc_handler ...passed 00:06:15.291 Test: test_spdk_rpc_is_method_allowed ...passed 00:06:15.291 Test: test_rpc_get_methods ...[2024-02-13 07:06:48.724386] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:06:15.291 passed 00:06:15.291 Test: 
test_rpc_spdk_get_version ...passed 00:06:15.291 Test: test_spdk_rpc_listen_close ...passed 00:06:15.291 Test: test_rpc_run_multiple_servers ...passed 00:06:15.291 00:06:15.291 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.291 suites 1 1 n/a 0 0 00:06:15.291 tests 6 6 6 0 0 00:06:15.291 asserts 23 23 23 0 n/a 00:06:15.291 00:06:15.291 Elapsed time = 0.001 seconds 00:06:15.291 00:06:15.291 real 0m0.036s 00:06:15.291 user 0m0.000s 00:06:15.291 sys 0m0.034s 00:06:15.291 07:06:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.291 ************************************ 00:06:15.291 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.291 END TEST unittest_rpc 00:06:15.291 ************************************ 00:06:15.291 07:06:48 -- unit/unittest.sh@245 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:15.291 07:06:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:15.291 07:06:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:15.291 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.291 ************************************ 00:06:15.291 START TEST unittest_notify 00:06:15.291 ************************************ 00:06:15.291 07:06:48 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:06:15.291 00:06:15.291 00:06:15.291 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.291 http://cunit.sourceforge.net/ 00:06:15.291 00:06:15.291 00:06:15.291 Suite: app_suite 00:06:15.291 Test: notify ...passed 00:06:15.291 00:06:15.291 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.291 suites 1 1 n/a 0 0 00:06:15.291 tests 1 1 1 0 0 00:06:15.291 asserts 13 13 13 0 n/a 00:06:15.291 00:06:15.291 Elapsed time = 0.000 seconds 00:06:15.291 00:06:15.291 real 0m0.027s 00:06:15.291 user 0m0.011s 00:06:15.291 sys 0m0.016s 00:06:15.291 07:06:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:15.291 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.291 ************************************ 00:06:15.291 END TEST unittest_notify 00:06:15.291 ************************************ 00:06:15.291 07:06:48 -- unit/unittest.sh@246 -- # run_test unittest_nvme unittest_nvme 00:06:15.291 07:06:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:15.291 07:06:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:15.291 07:06:48 -- common/autotest_common.sh@10 -- # set +x 00:06:15.291 ************************************ 00:06:15.291 START TEST unittest_nvme 00:06:15.291 ************************************ 00:06:15.291 07:06:48 -- common/autotest_common.sh@1102 -- # unittest_nvme 00:06:15.291 07:06:48 -- unit/unittest.sh@86 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:06:15.291 00:06:15.291 00:06:15.291 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.291 http://cunit.sourceforge.net/ 00:06:15.291 00:06:15.291 00:06:15.291 Suite: nvme 00:06:15.291 Test: test_opc_data_transfer ...passed 00:06:15.291 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:06:15.291 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:06:15.291 Test: test_trid_parse_and_compare ...[2024-02-13 07:06:48.890285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1167:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:06:15.291 [2024-02-13 07:06:48.890650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed 
to parse transport ID 00:06:15.291 [2024-02-13 07:06:48.890888] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1179:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:06:15.291 [2024-02-13 07:06:48.891029] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:15.291 [2024-02-13 07:06:48.891212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1190:parse_next_key: *ERROR*: Key without value 00:06:15.291 [2024-02-13 07:06:48.891423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1224:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:06:15.291 passed 00:06:15.291 Test: test_trid_trtype_str ...passed 00:06:15.291 Test: test_trid_adrfam_str ...passed 00:06:15.291 Test: test_nvme_ctrlr_probe ...[2024-02-13 07:06:48.892344] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:15.291 passed 00:06:15.291 Test: test_spdk_nvme_probe ...[2024-02-13 07:06:48.892730] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:15.291 [2024-02-13 07:06:48.892877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:15.291 [2024-02-13 07:06:48.892995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 812:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:06:15.291 [2024-02-13 07:06:48.893148] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:06:15.291 passed 00:06:15.291 Test: test_spdk_nvme_connect ...[2024-02-13 07:06:48.893525] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 989:spdk_nvme_connect: *ERROR*: No transport ID specified 00:06:15.291 [2024-02-13 07:06:48.893990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:15.291 [2024-02-13 07:06:48.894156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1000:spdk_nvme_connect: *ERROR*: Create probe context failed 00:06:15.291 passed 00:06:15.291 Test: test_nvme_ctrlr_probe_internal ...[2024-02-13 07:06:48.894621] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:06:15.291 [2024-02-13 07:06:48.894773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:06:15.291 passed 00:06:15.291 Test: test_nvme_init_controllers ...[2024-02-13 07:06:48.895160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:06:15.291 passed 00:06:15.291 Test: test_nvme_driver_init ...[2024-02-13 07:06:48.895555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:06:15.291 [2024-02-13 07:06:48.895626] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:06:15.551 [2024-02-13 07:06:49.008206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:06:15.551 [2024-02-13 07:06:49.008580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 618:nvme_driver_init: *ERROR*: failed to initialize mutex 00:06:15.551 passed 00:06:15.551 Test: test_spdk_nvme_detach ...passed 00:06:15.551 Test: test_nvme_completion_poll_cb ...passed 00:06:15.551 Test: test_nvme_user_copy_cmd_complete ...passed 
00:06:15.551 Test: test_nvme_allocate_request_null ...passed 00:06:15.551 Test: test_nvme_allocate_request ...passed 00:06:15.551 Test: test_nvme_free_request ...passed 00:06:15.551 Test: test_nvme_allocate_request_user_copy ...passed 00:06:15.551 Test: test_nvme_robust_mutex_init_shared ...passed 00:06:15.551 Test: test_nvme_request_check_timeout ...passed 00:06:15.551 Test: test_nvme_wait_for_completion ...passed 00:06:15.551 Test: test_spdk_nvme_parse_func ...passed 00:06:15.551 Test: test_spdk_nvme_detach_async ...passed 00:06:15.551 Test: test_nvme_parse_addr ...[2024-02-13 07:06:49.012376] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1577:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:06:15.551 passed 00:06:15.551 00:06:15.551 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.551 suites 1 1 n/a 0 0 00:06:15.551 tests 25 25 25 0 0 00:06:15.551 asserts 326 326 326 0 n/a 00:06:15.551 00:06:15.551 Elapsed time = 0.007 seconds 00:06:15.551 07:06:49 -- unit/unittest.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:06:15.551 00:06:15.551 00:06:15.551 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.551 http://cunit.sourceforge.net/ 00:06:15.551 00:06:15.551 00:06:15.551 Suite: nvme_ctrlr 00:06:15.551 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-02-13 07:06:49.046256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.551 passed 00:06:15.551 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-02-13 07:06:49.048032] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.551 passed 00:06:15.551 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-02-13 07:06:49.049350] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.551 passed 00:06:15.551 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-02-13 07:06:49.050739] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.551 passed 00:06:15.551 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-02-13 07:06:49.052058] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.551 [2024-02-13 07:06:49.053241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-02-13 07:06:49.054457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-02-13 07:06:49.055709] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:15.551 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-02-13 07:06:49.058292] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.551 [2024-02-13 07:06:49.060636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-02-13 
07:06:49.061890] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:15.551 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-02-13 07:06:49.064513] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.551 [2024-02-13 07:06:49.065774] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-02-13 07:06:49.068229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3933:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:06:15.551 Test: test_nvme_ctrlr_init_delay ...[2024-02-13 07:06:49.070763] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.551 passed 00:06:15.552 Test: test_alloc_io_qpair_rr_1 ...[2024-02-13 07:06:49.072162] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.552 [2024-02-13 07:06:49.072331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:15.552 [2024-02-13 07:06:49.072523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:15.552 passed 00:06:15.552 Test: test_ctrlr_get_default_ctrlr_opts ...[2024-02-13 07:06:49.072588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:15.552 [2024-02-13 07:06:49.072624] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 385:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:06:15.552 passed 00:06:15.552 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:06:15.552 Test: test_alloc_io_qpair_wrr_1 ...[2024-02-13 07:06:49.072754] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.552 passed 00:06:15.552 Test: test_alloc_io_qpair_wrr_2 ...[2024-02-13 07:06:49.072981] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.552 [2024-02-13 07:06:49.073124] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:06:15.552 passed 00:06:15.552 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-02-13 07:06:49.073402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4832:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:06:15.552 [2024-02-13 07:06:49.073563] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:15.552 [2024-02-13 07:06:49.073654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4909:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 
00:06:15.552 [2024-02-13 07:06:49.073733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4869:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:06:15.552 passed 00:06:15.552 Test: test_nvme_ctrlr_fail ...passed 00:06:15.552 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...[2024-02-13 07:06:49.073791] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [] in failed state. 00:06:15.552 passed 00:06:15.552 Test: test_nvme_ctrlr_set_supported_features ...passed 00:06:15.552 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:06:15.552 Test: test_nvme_ctrlr_test_active_ns ...[2024-02-13 07:06:49.074084] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:06:15.812 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:06:15.812 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:06:15.812 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-02-13 07:06:49.399210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-02-13 07:06:49.406650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-02-13 07:06:49.407958] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 [2024-02-13 07:06:49.408043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:2869:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:06:15.812 passed 00:06:15.812 Test: test_alloc_io_qpair_fail ...[2024-02-13 07:06:49.409224] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_add_remove_process ...passed 00:06:15.812 Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-02-13 07:06:49.409386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 497:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_set_state ...passed 00:06:15.812 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-02-13 07:06:49.409573] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1464:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:06:15.812 [2024-02-13 07:06:49.409617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-02-13 07:06:49.433594] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_ns_mgmt ...[2024-02-13 07:06:49.477010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_reset ...[2024-02-13 07:06:49.478629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_aer_callback ...[2024-02-13 07:06:49.479100] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-02-13 07:06:49.480638] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:06:15.812 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:06:15.812 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-02-13 07:06:49.482542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:06:15.812 Test: test_nvme_ctrlr_ana_resize ...[2024-02-13 07:06:49.483963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:06:15.812 Test: test_nvme_transport_ctrlr_ready ...[2024-02-13 07:06:49.485631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4015:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:06:15.812 passed 00:06:15.812 Test: test_nvme_ctrlr_disable ...[2024-02-13 07:06:49.485701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 51 (error) 00:06:15.812 [2024-02-13 07:06:49.485761] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4134:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:06:15.812 passed 00:06:15.812 00:06:15.812 Run Summary: Type Total Ran Passed Failed Inactive 00:06:15.812 suites 1 1 n/a 0 0 00:06:15.812 tests 43 43 43 0 0 00:06:15.812 asserts 10418 10418 10418 0 n/a 00:06:15.812 00:06:15.812 Elapsed time = 0.399 seconds 00:06:16.072 07:06:49 -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:06:16.072 00:06:16.072 00:06:16.072 CUnit - A unit testing framework for C - Version 2.1-3 
00:06:16.072 http://cunit.sourceforge.net/ 00:06:16.072 00:06:16.072 00:06:16.072 Suite: nvme_ctrlr_cmd 00:06:16.072 Test: test_get_log_pages ...passed 00:06:16.072 Test: test_set_feature_cmd ...passed 00:06:16.072 Test: test_set_feature_ns_cmd ...passed 00:06:16.072 Test: test_get_feature_cmd ...passed 00:06:16.072 Test: test_get_feature_ns_cmd ...passed 00:06:16.072 Test: test_abort_cmd ...passed 00:06:16.072 Test: test_set_host_id_cmds ...[2024-02-13 07:06:49.526137] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 502:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:06:16.072 passed 00:06:16.072 Test: test_io_cmd_raw_no_payload_build ...passed 00:06:16.072 Test: test_io_raw_cmd ...passed 00:06:16.072 Test: test_io_raw_cmd_with_md ...passed 00:06:16.072 Test: test_namespace_attach ...passed 00:06:16.072 Test: test_namespace_detach ...passed 00:06:16.072 Test: test_namespace_create ...passed 00:06:16.072 Test: test_namespace_delete ...passed 00:06:16.072 Test: test_doorbell_buffer_config ...passed 00:06:16.072 Test: test_format_nvme ...passed 00:06:16.072 Test: test_fw_commit ...passed 00:06:16.072 Test: test_fw_image_download ...passed 00:06:16.072 Test: test_sanitize ...passed 00:06:16.072 Test: test_directive ...passed 00:06:16.072 Test: test_nvme_request_add_abort ...passed 00:06:16.072 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:06:16.072 Test: test_nvme_ctrlr_cmd_identify ...passed 00:06:16.072 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:06:16.072 00:06:16.072 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.072 suites 1 1 n/a 0 0 00:06:16.072 tests 24 24 24 0 0 00:06:16.072 asserts 198 198 198 0 n/a 00:06:16.072 00:06:16.072 Elapsed time = 0.001 seconds 00:06:16.072 07:06:49 -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:06:16.072 00:06:16.072 00:06:16.072 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.072 http://cunit.sourceforge.net/ 00:06:16.072 00:06:16.072 00:06:16.072 Suite: nvme_ctrlr_cmd 00:06:16.072 Test: test_geometry_cmd ...passed 00:06:16.072 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:06:16.072 00:06:16.072 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.072 suites 1 1 n/a 0 0 00:06:16.072 tests 2 2 2 0 0 00:06:16.072 asserts 7 7 7 0 n/a 00:06:16.072 00:06:16.072 Elapsed time = 0.000 seconds 00:06:16.072 07:06:49 -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:06:16.072 00:06:16.072 00:06:16.072 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.072 http://cunit.sourceforge.net/ 00:06:16.072 00:06:16.072 00:06:16.072 Suite: nvme 00:06:16.072 Test: test_nvme_ns_construct ...passed 00:06:16.072 Test: test_nvme_ns_uuid ...passed 00:06:16.072 Test: test_nvme_ns_csi ...passed 00:06:16.072 Test: test_nvme_ns_data ...passed 00:06:16.072 Test: test_nvme_ns_set_identify_data ...passed 00:06:16.072 Test: test_spdk_nvme_ns_get_values ...passed 00:06:16.072 Test: test_spdk_nvme_ns_is_active ...passed 00:06:16.072 Test: spdk_nvme_ns_supports ...passed 00:06:16.072 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:06:16.072 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:06:16.072 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:06:16.072 Test: test_nvme_ns_find_id_desc ...passed 00:06:16.072 00:06:16.072 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.072 suites 1 1 n/a 0 0 00:06:16.072 tests 
12 12 12 0 0 00:06:16.072 asserts 83 83 83 0 n/a 00:06:16.072 00:06:16.072 Elapsed time = 0.000 seconds 00:06:16.072 07:06:49 -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:06:16.072 00:06:16.072 00:06:16.072 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.072 http://cunit.sourceforge.net/ 00:06:16.072 00:06:16.072 00:06:16.072 Suite: nvme_ns_cmd 00:06:16.072 Test: split_test ...passed 00:06:16.072 Test: split_test2 ...passed 00:06:16.072 Test: split_test3 ...passed 00:06:16.072 Test: split_test4 ...passed 00:06:16.072 Test: test_nvme_ns_cmd_flush ...passed 00:06:16.072 Test: test_nvme_ns_cmd_dataset_management ...passed 00:06:16.072 Test: test_nvme_ns_cmd_copy ...passed 00:06:16.072 Test: test_io_flags ...[2024-02-13 07:06:49.623406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:06:16.072 passed 00:06:16.072 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:06:16.072 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:06:16.072 Test: test_nvme_ns_cmd_reservation_register ...passed 00:06:16.072 Test: test_nvme_ns_cmd_reservation_release ...passed 00:06:16.072 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:06:16.072 Test: test_nvme_ns_cmd_reservation_report ...passed 00:06:16.072 Test: test_cmd_child_request ...passed 00:06:16.072 Test: test_nvme_ns_cmd_readv ...passed 00:06:16.072 Test: test_nvme_ns_cmd_read_with_md ...passed 00:06:16.072 Test: test_nvme_ns_cmd_writev ...[2024-02-13 07:06:49.624570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 287:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:06:16.072 passed 00:06:16.072 Test: test_nvme_ns_cmd_write_with_md ...passed 00:06:16.072 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:06:16.072 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:06:16.072 Test: test_nvme_ns_cmd_comparev ...passed 00:06:16.072 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:06:16.072 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:06:16.072 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:06:16.072 Test: test_nvme_ns_cmd_setup_request ...passed 00:06:16.072 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:06:16.072 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:06:16.072 Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-02-13 07:06:49.626427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:16.072 passed 00:06:16.072 Test: test_nvme_ns_cmd_verify ...[2024-02-13 07:06:49.626523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:06:16.072 passed 00:06:16.072 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:06:16.073 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:06:16.073 00:06:16.073 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.073 suites 1 1 n/a 0 0 00:06:16.073 tests 32 32 32 0 0 00:06:16.073 asserts 550 550 550 0 n/a 00:06:16.073 00:06:16.073 Elapsed time = 0.004 seconds 00:06:16.073 07:06:49 -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:06:16.073 00:06:16.073 00:06:16.073 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.073 http://cunit.sourceforge.net/ 00:06:16.073 00:06:16.073 00:06:16.073 Suite: nvme_ns_cmd 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 
00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:06:16.073 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:06:16.073 00:06:16.073 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.073 suites 1 1 n/a 0 0 00:06:16.073 tests 12 12 12 0 0 00:06:16.073 asserts 123 123 123 0 n/a 00:06:16.073 00:06:16.073 Elapsed time = 0.001 seconds 00:06:16.073 07:06:49 -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:06:16.073 00:06:16.073 00:06:16.073 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.073 http://cunit.sourceforge.net/ 00:06:16.073 00:06:16.073 00:06:16.073 Suite: nvme_qpair 00:06:16.073 Test: test3 ...passed 00:06:16.073 Test: test_ctrlr_failed ...passed 00:06:16.073 Test: struct_packing ...passed 00:06:16.073 Test: test_nvme_qpair_process_completions ...[2024-02-13 07:06:49.694078] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:16.073 [2024-02-13 07:06:49.694422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:16.073 [2024-02-13 07:06:49.694509] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:06:16.073 passed 00:06:16.073 Test: test_nvme_completion_is_retry ...passed 00:06:16.073 Test: test_get_status_string ...passed 00:06:16.073 Test: test_nvme_qpair_add_cmd_error_injection ...[2024-02-13 07:06:49.694599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:06:16.073 passed 00:06:16.073 Test: test_nvme_qpair_submit_request ...passed 00:06:16.073 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:06:16.073 Test: test_nvme_qpair_manual_complete_request ...passed 00:06:16.073 Test: test_nvme_qpair_init_deinit ...passed 00:06:16.073 Test: test_nvme_get_sgl_print_info ...passed 00:06:16.073 00:06:16.073 [2024-02-13 07:06:49.695010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:06:16.073 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.073 suites 1 1 n/a 0 0 00:06:16.073 tests 12 12 12 0 0 00:06:16.073 asserts 154 154 154 0 n/a 00:06:16.073 00:06:16.073 Elapsed time = 0.001 seconds 00:06:16.073 07:06:49 -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:06:16.073 00:06:16.073 00:06:16.073 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.073 http://cunit.sourceforge.net/ 00:06:16.073 00:06:16.073 00:06:16.073 Suite: nvme_pcie 00:06:16.073 Test: test_prp_list_append 
...[2024-02-13 07:06:49.727821] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:16.073 [2024-02-13 07:06:49.728143] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1231:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:06:16.073 [2024-02-13 07:06:49.728191] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1221:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:06:16.073 [2024-02-13 07:06:49.728453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:16.073 passed 00:06:16.073 Test: test_nvme_pcie_hotplug_monitor ...[2024-02-13 07:06:49.728548] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1215:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:06:16.073 passed 00:06:16.073 Test: test_shadow_doorbell_update ...passed 00:06:16.073 Test: test_build_contig_hw_sgl_request ...passed 00:06:16.073 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:06:16.073 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:06:16.073 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:06:16.073 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:06:16.073 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:06:16.073 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...[2024-02-13 07:06:49.728787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:06:16.073 passed 00:06:16.073 Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-02-13 07:06:49.728885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
00:06:16.073 passed 00:06:16.073 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:06:16.073 Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-02-13 07:06:49.728963] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:06:16.073 [2024-02-13 07:06:49.729028] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:06:16.073 passed 00:06:16.073 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:06:16.073 00:06:16.073 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.073 suites 1 1 n/a 0 0 00:06:16.073 tests 14 14 14 0 0 00:06:16.073 asserts 235 235 235 0 n/a 00:06:16.073 00:06:16.073 Elapsed time = 0.001 seconds 00:06:16.073 [2024-02-13 07:06:49.729125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:06:16.073 07:06:49 -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:06:16.333 00:06:16.333 00:06:16.333 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.333 http://cunit.sourceforge.net/ 00:06:16.333 00:06:16.333 00:06:16.333 Suite: nvme_ns_cmd 00:06:16.333 Test: nvme_poll_group_create_test ...passed 00:06:16.333 Test: nvme_poll_group_add_remove_test ...passed 00:06:16.333 Test: nvme_poll_group_process_completions ...passed 00:06:16.333 Test: nvme_poll_group_destroy_test ...passed 00:06:16.333 Test: nvme_poll_group_get_free_stats ...passed 00:06:16.333 00:06:16.333 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.333 suites 1 1 n/a 0 0 00:06:16.333 tests 5 5 5 0 0 00:06:16.333 asserts 75 75 75 0 n/a 00:06:16.333 00:06:16.333 Elapsed time = 0.001 seconds 00:06:16.333 07:06:49 -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:06:16.333 00:06:16.333 00:06:16.333 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.333 http://cunit.sourceforge.net/ 00:06:16.333 00:06:16.333 00:06:16.333 Suite: nvme_quirks 00:06:16.333 Test: test_nvme_quirks_striping ...passed 00:06:16.333 00:06:16.333 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.333 suites 1 1 n/a 0 0 00:06:16.333 tests 1 1 1 0 0 00:06:16.333 asserts 5 5 5 0 n/a 00:06:16.333 00:06:16.333 Elapsed time = 0.000 seconds 00:06:16.333 07:06:49 -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:06:16.333 00:06:16.333 00:06:16.333 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.333 http://cunit.sourceforge.net/ 00:06:16.333 00:06:16.333 00:06:16.333 Suite: nvme_tcp 00:06:16.333 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:06:16.333 Test: test_nvme_tcp_build_iovs ...passed 00:06:16.333 Test: test_nvme_tcp_build_sgl_request ...[2024-02-13 07:06:49.823438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 781:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffcfbed5ab0, and the iovcnt=16, remaining_size=28672 00:06:16.333 passed 00:06:16.333 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:06:16.333 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:06:16.333 Test: test_nvme_tcp_req_complete_safe ...passed 00:06:16.333 Test: test_nvme_tcp_req_get ...passed 00:06:16.333 Test: test_nvme_tcp_req_init ...passed 00:06:16.333 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:06:16.333 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:06:16.333 Test: 
test_nvme_tcp_qpair_set_recv_state ...passed[2024-02-13 07:06:49.825286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed77d0 is same with the state(6) to be set 00:06:16.333 00:06:16.333 Test: test_nvme_tcp_alloc_reqs ...passed 00:06:16.333 Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-02-13 07:06:49.826009] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6960 is same with the state(5) to be set 00:06:16.333 passed 00:06:16.333 Test: test_nvme_tcp_pdu_ch_handle ...[2024-02-13 07:06:49.826076] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1106:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffcfbed7490 00:06:16.333 [2024-02-13 07:06:49.826575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1165:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:06:16.333 [2024-02-13 07:06:49.826697] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6e20 is same with the state(5) to be set 00:06:16.333 [2024-02-13 07:06:49.826773] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1116:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:06:16.333 [2024-02-13 07:06:49.827135] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6e20 is same with the state(5) to be set 00:06:16.333 [2024-02-13 07:06:49.827414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1157:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:06:16.334 [2024-02-13 07:06:49.827454] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6e20 is same with the state(5) to be set 00:06:16.334 [2024-02-13 07:06:49.827494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6e20 is same with the state(5) to be set 00:06:16.334 [2024-02-13 07:06:49.827802] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6e20 is same with the state(5) to be set 00:06:16.334 [2024-02-13 07:06:49.827883] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6e20 is same with the state(5) to be set 00:06:16.334 [2024-02-13 07:06:49.828193] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6e20 is same with the state(5) to be set 00:06:16.334 [2024-02-13 07:06:49.828236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6e20 is same with the state(5) to be set 00:06:16.334 passed 00:06:16.334 Test: test_nvme_tcp_qpair_connect_sock ...[2024-02-13 07:06:49.828944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2237:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:06:16.334 [2024-02-13 07:06:49.828999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2249:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:16.334 passed 00:06:16.334 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:06:16.334 Test: test_nvme_tcp_c2h_payload_handle ...[2024-02-13 
07:06:49.829569] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2249:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:06:16.334 passed 00:06:16.334 Test: test_nvme_tcp_icresp_handle ...[2024-02-13 07:06:49.829676] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1280:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcfbed6fd0): PDU Sequence Error 00:06:16.334 [2024-02-13 07:06:49.829771] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1506:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:06:16.334 [2024-02-13 07:06:49.830131] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1513:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:06:16.334 [2024-02-13 07:06:49.830177] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6970 is same with the state(5) to be set 00:06:16.334 [2024-02-13 07:06:49.830209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1522:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:06:16.334 [2024-02-13 07:06:49.830262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6970 is same with the state(5) to be set 00:06:16.334 [2024-02-13 07:06:49.830596] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed6970 is same with the state(0) to be set 00:06:16.334 passed 00:06:16.334 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:06:16.334 Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-02-13 07:06:49.830985] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1280:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffcfbed7490): PDU Sequence Error 00:06:16.334 [2024-02-13 07:06:49.831293] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1583:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffcfbed5c50 00:06:16.334 passed 00:06:16.334 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:06:16.334 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-02-13 07:06:49.831889] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 351:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffcfbed52d0, errno=0, rc=0 00:06:16.334 [2024-02-13 07:06:49.831943] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed52d0 is same with the state(5) to be set 00:06:16.334 [2024-02-13 07:06:49.832415] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 320:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffcfbed52d0 is same with the state(5) to be set 00:06:16.334 [2024-02-13 07:06:49.832481] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcfbed52d0 (0): Success 00:06:16.334 passed 00:06:16.334 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-02-13 07:06:49.832521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2096:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffcfbed52d0 (0): Success 00:06:16.334 [2024-02-13 07:06:49.950544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2420:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 
00:06:16.334 [2024-02-13 07:06:49.950698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2420:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:16.334 passed 00:06:16.334 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:06:16.334 Test: test_nvme_tcp_poll_group_get_stats ...[2024-02-13 07:06:49.951364] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2847:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:16.334 [2024-02-13 07:06:49.951413] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2847:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:16.334 passed 00:06:16.334 Test: test_nvme_tcp_ctrlr_construct ...[2024-02-13 07:06:49.951933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2420:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:06:16.334 [2024-02-13 07:06:49.952003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:16.334 [2024-02-13 07:06:49.952108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2237:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:06:16.334 [2024-02-13 07:06:49.952183] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:16.334 [2024-02-13 07:06:49.952606] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2287:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x613000001540 with addr=192.168.1.78, port=23 00:06:16.334 [2024-02-13 07:06:49.952697] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2594:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:06:16.334 passed 00:06:16.334 Test: test_nvme_tcp_qpair_submit_request ...[2024-02-13 07:06:49.953366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 781:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x613000001a80, and the iovcnt=1, remaining_size=1024 00:06:16.334 [2024-02-13 07:06:49.953426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 959:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:06:16.334 passed 00:06:16.334 00:06:16.334 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.334 suites 1 1 n/a 0 0 00:06:16.334 tests 27 27 27 0 0 00:06:16.334 asserts 624 624 624 0 n/a 00:06:16.334 00:06:16.334 Elapsed time = 0.130 seconds 00:06:16.334 07:06:49 -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:06:16.334 00:06:16.334 00:06:16.334 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.334 http://cunit.sourceforge.net/ 00:06:16.334 00:06:16.334 00:06:16.334 Suite: nvme_transport 00:06:16.334 Test: test_nvme_get_transport ...passed 00:06:16.334 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:06:16.334 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:06:16.334 Test: test_nvme_transport_poll_group_add_remove ...passed 00:06:16.334 Test: test_ctrlr_get_memory_domains ...passed 00:06:16.334 00:06:16.334 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.334 suites 1 1 n/a 0 0 00:06:16.334 tests 5 5 5 0 0 00:06:16.334 asserts 28 28 28 0 n/a 00:06:16.334 00:06:16.334 Elapsed time = 0.000 seconds 00:06:16.334 07:06:50 -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:06:16.334 00:06:16.334 00:06:16.334 CUnit - A unit testing framework for 
C - Version 2.1-3 00:06:16.334 http://cunit.sourceforge.net/ 00:06:16.334 00:06:16.334 00:06:16.334 Suite: nvme_io_msg 00:06:16.334 Test: test_nvme_io_msg_send ...passed 00:06:16.334 Test: test_nvme_io_msg_process ...passed 00:06:16.334 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:06:16.334 00:06:16.334 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.334 suites 1 1 n/a 0 0 00:06:16.334 tests 3 3 3 0 0 00:06:16.334 asserts 56 56 56 0 n/a 00:06:16.334 00:06:16.334 Elapsed time = 0.000 seconds 00:06:16.593 07:06:50 -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:06:16.593 00:06:16.593 00:06:16.593 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.593 http://cunit.sourceforge.net/ 00:06:16.593 00:06:16.593 00:06:16.593 Suite: nvme_pcie_common 00:06:16.593 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-02-13 07:06:50.053342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:06:16.593 passed 00:06:16.593 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:06:16.593 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:06:16.593 Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-02-13 07:06:50.054073] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 503:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:06:16.593 passed 00:06:16.594 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-02-13 07:06:50.054180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 456:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:06:16.594 [2024-02-13 07:06:50.054209] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 550:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:06:16.594 passed 00:06:16.594 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:06:16.594 00:06:16.594 [2024-02-13 07:06:50.054637] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:16.594 [2024-02-13 07:06:50.054684] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1793:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:16.594 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.594 suites 1 1 n/a 0 0 00:06:16.594 tests 6 6 6 0 0 00:06:16.594 asserts 148 148 148 0 n/a 00:06:16.594 00:06:16.594 Elapsed time = 0.001 seconds 00:06:16.594 07:06:50 -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:06:16.594 00:06:16.594 00:06:16.594 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.594 http://cunit.sourceforge.net/ 00:06:16.594 00:06:16.594 00:06:16.594 Suite: nvme_fabric 00:06:16.594 Test: test_nvme_fabric_prop_set_cmd ...passed 00:06:16.594 Test: test_nvme_fabric_prop_get_cmd ...passed 00:06:16.594 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:06:16.594 Test: test_nvme_fabric_discover_probe ...passed 00:06:16.594 Test: test_nvme_fabric_qpair_connect ...[2024-02-13 07:06:50.087923] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:06:16.594 passed 00:06:16.594 00:06:16.594 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.594 suites 1 
1 n/a 0 0 00:06:16.594 tests 5 5 5 0 0 00:06:16.594 asserts 60 60 60 0 n/a 00:06:16.594 00:06:16.594 Elapsed time = 0.001 seconds 00:06:16.594 07:06:50 -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:06:16.594 00:06:16.594 00:06:16.594 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.594 http://cunit.sourceforge.net/ 00:06:16.594 00:06:16.594 00:06:16.594 Suite: nvme_opal 00:06:16.594 Test: test_opal_nvme_security_recv_send_done ...passed 00:06:16.594 Test: test_opal_add_short_atom_header ...passed 00:06:16.594 00:06:16.594 [2024-02-13 07:06:50.120296] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:06:16.594 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.594 suites 1 1 n/a 0 0 00:06:16.594 tests 2 2 2 0 0 00:06:16.594 asserts 22 22 22 0 n/a 00:06:16.594 00:06:16.594 Elapsed time = 0.000 seconds 00:06:16.594 00:06:16.594 real 0m1.262s 00:06:16.594 user 0m0.711s 00:06:16.594 sys 0m0.394s 00:06:16.594 07:06:50 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.594 07:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:16.594 ************************************ 00:06:16.594 END TEST unittest_nvme 00:06:16.594 ************************************ 00:06:16.594 07:06:50 -- unit/unittest.sh@247 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:16.594 07:06:50 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:16.594 07:06:50 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:16.594 07:06:50 -- common/autotest_common.sh@10 -- # set +x 00:06:16.594 ************************************ 00:06:16.594 START TEST unittest_log 00:06:16.594 ************************************ 00:06:16.594 07:06:50 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:06:16.594 00:06:16.594 00:06:16.594 CUnit - A unit testing framework for C - Version 2.1-3 00:06:16.594 http://cunit.sourceforge.net/ 00:06:16.594 00:06:16.594 00:06:16.594 Suite: log 00:06:16.594 Test: log_test ...[2024-02-13 07:06:50.202469] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:06:16.594 [2024-02-13 07:06:50.202731] log_ut.c: 57:log_test: *DEBUG*: log test 00:06:16.594 log dump test: 00:06:16.594 passed 00:06:16.594 Test: deprecation ...00000000 6c 6f 67 20 64 75 6d 70 log dump 00:06:16.594 spdk dump test: 00:06:16.594 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:06:16.594 spdk dump test: 00:06:16.594 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:06:16.594 00000010 65 20 63 68 61 72 73 e chars 00:06:17.530 passed 00:06:17.530 00:06:17.530 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.530 suites 1 1 n/a 0 0 00:06:17.530 tests 2 2 2 0 0 00:06:17.530 asserts 73 73 73 0 n/a 00:06:17.530 00:06:17.530 Elapsed time = 0.001 seconds 00:06:17.790 00:06:17.790 real 0m1.032s 00:06:17.790 user 0m0.028s 00:06:17.790 sys 0m0.004s 00:06:17.790 07:06:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.790 ************************************ 00:06:17.790 END TEST unittest_log 00:06:17.790 ************************************ 00:06:17.790 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.790 07:06:51 -- unit/unittest.sh@248 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:17.790 07:06:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 
']' 00:06:17.790 07:06:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:17.790 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.790 ************************************ 00:06:17.790 START TEST unittest_lvol 00:06:17.790 ************************************ 00:06:17.790 07:06:51 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:06:17.790 00:06:17.790 00:06:17.790 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.790 http://cunit.sourceforge.net/ 00:06:17.790 00:06:17.790 00:06:17.790 Suite: lvol 00:06:17.790 Test: lvs_init_unload_success ...[2024-02-13 07:06:51.296021] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:06:17.790 passed 00:06:17.790 Test: lvs_init_destroy_success ...passed[2024-02-13 07:06:51.296764] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:06:17.790 00:06:17.790 Test: lvs_init_opts_success ...passed 00:06:17.790 Test: lvs_unload_lvs_is_null_fail ...passed 00:06:17.790 Test: lvs_names ...[2024-02-13 07:06:51.297318] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:06:17.790 [2024-02-13 07:06:51.297537] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:06:17.790 [2024-02-13 07:06:51.297618] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:06:17.790 [2024-02-13 07:06:51.297897] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:06:17.790 passed 00:06:17.790 Test: lvol_create_destroy_success ...passed 00:06:17.790 Test: lvol_create_fail ...[2024-02-13 07:06:51.298964] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:06:17.790 [2024-02-13 07:06:51.299176] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:06:17.790 passed 00:06:17.790 Test: lvol_destroy_fail ...[2024-02-13 07:06:51.299809] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:06:17.790 passed 00:06:17.790 Test: lvol_close ...[2024-02-13 07:06:51.300182] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:06:17.790 passed 00:06:17.790 Test: lvol_resize ...[2024-02-13 07:06:51.300263] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:06:17.790 passed 00:06:17.790 Test: lvol_set_read_only ...passed 00:06:17.790 Test: test_lvs_load ...[2024-02-13 07:06:51.301436] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:06:17.790 [2024-02-13 07:06:51.301608] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:06:17.790 passed 00:06:17.790 Test: lvols_load ...[2024-02-13 07:06:51.301927] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:17.790 [2024-02-13 07:06:51.302213] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:06:17.790 passed 00:06:17.790 Test: lvol_open ...passed 00:06:17.790 Test: lvol_snapshot ...passed 00:06:17.790 Test: lvol_snapshot_fail ...[2024-02-13 
07:06:51.303761] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:06:17.790 passed 00:06:17.790 Test: lvol_clone ...passed 00:06:17.790 Test: lvol_clone_fail ...[2024-02-13 07:06:51.304983] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:06:17.790 passed 00:06:17.790 Test: lvol_iter_clones ...passed 00:06:17.790 Test: lvol_refcnt ...[2024-02-13 07:06:51.306078] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol f6d62db7-87c6-4a13-a822-aebbfcd44de8 because it is still open 00:06:17.790 passed 00:06:17.791 Test: lvol_names ...[2024-02-13 07:06:51.306610] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:06:17.791 [2024-02-13 07:06:51.306828] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:17.791 [2024-02-13 07:06:51.307172] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:06:17.791 passed 00:06:17.791 Test: lvol_create_thin_provisioned ...passed 00:06:17.791 Test: lvol_rename ...[2024-02-13 07:06:51.308088] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:17.791 [2024-02-13 07:06:51.308290] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:06:17.791 passed 00:06:17.791 Test: lvs_rename ...[2024-02-13 07:06:51.308805] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:06:17.791 passed 00:06:17.791 Test: lvol_inflate ...[2024-02-13 07:06:51.309317] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:17.791 passed 00:06:17.791 Test: lvol_decouple_parent ...[2024-02-13 07:06:51.309901] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:06:17.791 passed 00:06:17.791 Test: lvol_get_xattr ...passed 00:06:17.791 Test: lvol_esnap_reload ...passed 00:06:17.791 Test: lvol_esnap_create_bad_args ...[2024-02-13 07:06:51.311062] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:06:17.791 [2024-02-13 07:06:51.311196] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
00:06:17.791 [2024-02-13 07:06:51.311334] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:06:17.791 [2024-02-13 07:06:51.311564] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:06:17.791 [2024-02-13 07:06:51.311793] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:06:17.791 passed 00:06:17.791 Test: lvol_esnap_create_delete ...passed 00:06:17.791 Test: lvol_esnap_load_esnaps ...[2024-02-13 07:06:51.312570] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:06:17.791 passed 00:06:17.791 Test: lvol_esnap_missing ...[2024-02-13 07:06:51.312986] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:17.791 [2024-02-13 07:06:51.313162] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:06:17.791 passed 00:06:17.791 Test: lvol_esnap_hotplug ... 00:06:17.791 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:06:17.791 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:06:17.791 [2024-02-13 07:06:51.314498] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 039e3491-0ca0-4ccf-bc90-10c5f64d73fa: failed to create esnap bs_dev: error -12 00:06:17.791 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:06:17.791 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:06:17.791 [2024-02-13 07:06:51.315078] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 27617288-8f33-4f8a-9d06-37f52179b5e8: failed to create esnap bs_dev: error -12 00:06:17.791 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:06:17.791 [2024-02-13 07:06:51.315445] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 970b7ac4-b662-4ef5-8488-7472cd629700: failed to create esnap bs_dev: error -12 00:06:17.791 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:06:17.791 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:06:17.791 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:06:17.791 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:06:17.791 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:06:17.791 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:06:17.791 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:06:17.791 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:06:17.791 passed 00:06:17.791 Test: lvol_get_by ...passed 00:06:17.791 00:06:17.791 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.791 suites 1 1 n/a 0 0 00:06:17.791 tests 34 34 34 0 0 00:06:17.791 asserts 1439 1439 1439 0 n/a 00:06:17.791 00:06:17.791 Elapsed time = 0.014 seconds 00:06:17.791 00:06:17.791 real 0m0.057s 00:06:17.791 user 0m0.028s 00:06:17.791 sys 0m0.021s 00:06:17.791 07:06:51 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.791 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.791 ************************************ 00:06:17.791 END TEST unittest_lvol 00:06:17.791 ************************************ 00:06:17.791 07:06:51 -- unit/unittest.sh@249 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:17.791 07:06:51 -- unit/unittest.sh@250 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:17.791 07:06:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:17.791 07:06:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:17.791 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.791 ************************************ 00:06:17.791 START TEST unittest_nvme_rdma 00:06:17.791 ************************************ 00:06:17.791 07:06:51 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:06:17.791 00:06:17.791 00:06:17.791 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.791 http://cunit.sourceforge.net/ 00:06:17.791 00:06:17.791 00:06:17.791 Suite: nvme_rdma 00:06:17.791 Test: test_nvme_rdma_build_sgl_request ...[2024-02-13 07:06:51.400262] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1452:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:06:17.791 [2024-02-13 07:06:51.400681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1625:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:17.791 [2024-02-13 07:06:51.400876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1681:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:06:17.791 passed 00:06:17.791 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:06:17.791 Test: test_nvme_rdma_build_contig_request ...[2024-02-13 07:06:51.401202] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1562:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:06:17.791 passed 00:06:17.791 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:06:17.791 Test: test_nvme_rdma_create_reqs ...[2024-02-13 07:06:51.401762] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1004:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:06:17.791 passed 00:06:17.791 Test: test_nvme_rdma_create_rsps ...[2024-02-13 07:06:51.402391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 922:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:06:17.791 passed 00:06:17.791 Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-02-13 07:06:51.402642] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1819:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:06:17.791 passed 00:06:17.791 Test: test_nvme_rdma_poller_create ...[2024-02-13 07:06:51.402791] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1819:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
00:06:17.791 passed 00:06:17.791 Test: test_nvme_rdma_qpair_process_cm_event ...[2024-02-13 07:06:51.403175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 523:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:06:17.791 passed 00:06:17.791 Test: test_nvme_rdma_ctrlr_construct ...passed 00:06:17.791 Test: test_nvme_rdma_req_put_and_get ...passed 00:06:17.791 Test: test_nvme_rdma_req_init ...passed 00:06:17.791 Test: test_nvme_rdma_validate_cm_event ...[2024-02-13 07:06:51.404032] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 614:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:06:17.791 [2024-02-13 07:06:51.404159] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 614:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:06:17.791 passed 00:06:17.791 Test: test_nvme_rdma_qpair_init ...passed 00:06:17.791 Test: test_nvme_rdma_qpair_submit_request ...passed 00:06:17.791 Test: test_nvme_rdma_memory_domain ...[2024-02-13 07:06:51.404750] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 349:nvme_rdma_get_memory_domain: *ERROR*: Failed to create memory domain 00:06:17.791 passed 00:06:17.791 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:06:17.791 Test: test_rdma_get_memory_translation ...[2024-02-13 07:06:51.405178] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1441:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:06:17.791 [2024-02-13 07:06:51.405350] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1452:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:06:17.791 passed 00:06:17.791 Test: test_get_rdma_qpair_from_wc ...passed 00:06:17.791 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:06:17.791 Test: test_nvme_rdma_poll_group_get_stats ...[2024-02-13 07:06:51.405787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3236:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:17.791 [2024-02-13 07:06:51.405945] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3236:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:06:17.791 passed 00:06:17.791 Test: test_nvme_rdma_qpair_set_poller ...[2024-02-13 07:06:51.406352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2969:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 00:06:17.791 [2024-02-13 07:06:51.406428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3015:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:06:17.791 [2024-02-13 07:06:51.406604] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 720:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffcaadceee0 on poll group 0x60b0000001a0 00:06:17.791 [2024-02-13 07:06:51.406704] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2969:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2. 
00:06:17.791 [2024-02-13 07:06:51.406767] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3015:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil) 00:06:17.791 [2024-02-13 07:06:51.406885] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 720:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7ffcaadceee0 on poll group 0x60b0000001a0 00:06:17.792 [2024-02-13 07:06:51.407054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 698:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:17.792 passed 00:06:17.792 00:06:17.792 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.792 suites 1 1 n/a 0 0 00:06:17.792 tests 22 22 22 0 0 00:06:17.792 asserts 412 412 412 0 n/a 00:06:17.792 00:06:17.792 Elapsed time = 0.003 seconds 00:06:17.792 ************************************ 00:06:17.792 END TEST unittest_nvme_rdma 00:06:17.792 ************************************ 00:06:17.792 00:06:17.792 real 0m0.037s 00:06:17.792 user 0m0.012s 00:06:17.792 sys 0m0.022s 00:06:17.792 07:06:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.792 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:17.792 07:06:51 -- unit/unittest.sh@251 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:17.792 07:06:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:17.792 07:06:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:17.792 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.051 ************************************ 00:06:18.051 START TEST unittest_nvmf_transport 00:06:18.051 ************************************ 00:06:18.051 07:06:51 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:06:18.051 00:06:18.051 00:06:18.051 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.051 http://cunit.sourceforge.net/ 00:06:18.051 00:06:18.051 00:06:18.051 Suite: nvmf 00:06:18.051 Test: test_spdk_nvmf_transport_create ...[2024-02-13 07:06:51.498986] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 247:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 00:06:18.051 [2024-02-13 07:06:51.499332] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 267:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:06:18.051 passed 00:06:18.051 Test: test_nvmf_transport_poll_group_create ...passed 00:06:18.051 Test: test_spdk_nvmf_transport_opts_init ...[2024-02-13 07:06:51.499404] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:06:18.051 [2024-02-13 07:06:51.499514] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 254:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:06:18.051 passed 00:06:18.051 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:06:18.051 00:06:18.051 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.051 suites 1 1 n/a 0 0 00:06:18.051 tests 4 4 4 0 0 00:06:18.051 asserts 49 49 49 0 n/a 00:06:18.051 00:06:18.051 Elapsed time = 0.001 seconds 00:06:18.051 [2024-02-13 07:06:51.499767] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 788:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 
00:06:18.051 [2024-02-13 07:06:51.499874] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 793:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:06:18.051 [2024-02-13 07:06:51.499898] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 798:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:06:18.051 00:06:18.051 real 0m0.040s 00:06:18.051 user 0m0.023s 00:06:18.051 sys 0m0.018s 00:06:18.051 ************************************ 00:06:18.051 END TEST unittest_nvmf_transport 00:06:18.051 ************************************ 00:06:18.051 07:06:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.051 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.051 07:06:51 -- unit/unittest.sh@252 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:18.051 07:06:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:18.051 07:06:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:18.051 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.051 ************************************ 00:06:18.051 START TEST unittest_rdma 00:06:18.051 ************************************ 00:06:18.051 07:06:51 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:06:18.051 00:06:18.051 00:06:18.051 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.051 http://cunit.sourceforge.net/ 00:06:18.051 00:06:18.051 00:06:18.051 Suite: rdma_common 00:06:18.051 Test: test_spdk_rdma_pd ...[2024-02-13 07:06:51.591500] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:18.051 [2024-02-13 07:06:51.591906] /home/vagrant/spdk_repo/spdk/lib/rdma/common.c: 533:spdk_rdma_get_pd: *ERROR*: Failed to get PD 00:06:18.051 passed 00:06:18.051 00:06:18.051 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.051 suites 1 1 n/a 0 0 00:06:18.051 tests 1 1 1 0 0 00:06:18.051 asserts 31 31 31 0 n/a 00:06:18.051 00:06:18.051 Elapsed time = 0.001 seconds 00:06:18.051 00:06:18.051 real 0m0.034s 00:06:18.051 user 0m0.016s 00:06:18.051 sys 0m0.018s 00:06:18.051 ************************************ 00:06:18.051 END TEST unittest_rdma 00:06:18.051 ************************************ 00:06:18.051 07:06:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.051 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.051 07:06:51 -- unit/unittest.sh@255 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:18.051 07:06:51 -- unit/unittest.sh@256 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:18.051 07:06:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:18.051 07:06:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:18.051 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.051 ************************************ 00:06:18.051 START TEST unittest_nvme_cuse 00:06:18.051 ************************************ 00:06:18.051 07:06:51 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut 00:06:18.051 00:06:18.051 00:06:18.051 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.051 http://cunit.sourceforge.net/ 00:06:18.051 00:06:18.051 00:06:18.051 Suite: nvme_cuse 00:06:18.051 Test: test_cuse_nvme_submit_io_read_write ...passed 00:06:18.051 Test: 
test_cuse_nvme_submit_io_read_write_with_md ...passed 00:06:18.051 Test: test_cuse_nvme_submit_passthru_cmd ...passed 00:06:18.051 Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed 00:06:18.051 Test: test_nvme_cuse_get_cuse_ns_device ...passed 00:06:18.051 Test: test_cuse_nvme_submit_io ...[2024-02-13 07:06:51.682482] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 656:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid 00:06:18.051 passed 00:06:18.051 Test: test_cuse_nvme_reset ...passed 00:06:18.051 Test: test_nvme_cuse_stop ...passed 00:06:18.051 Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed 00:06:18.051 00:06:18.051 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.051 suites 1 1 n/a 0 0 00:06:18.051 tests 9 9 9 0 0 00:06:18.051 asserts 121 121 121 0 n/a 00:06:18.051 00:06:18.051 Elapsed time = 0.002 seconds 00:06:18.051 [2024-02-13 07:06:51.682867] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 341:cuse_nvme_reset: *ERROR*: Namespace reset not supported 00:06:18.051 00:06:18.051 real 0m0.035s 00:06:18.051 user 0m0.016s 00:06:18.051 sys 0m0.020s 00:06:18.051 ************************************ 00:06:18.051 END TEST unittest_nvme_cuse 00:06:18.051 ************************************ 00:06:18.051 07:06:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.051 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.312 07:06:51 -- unit/unittest.sh@259 -- # run_test unittest_nvmf unittest_nvmf 00:06:18.312 07:06:51 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:18.312 07:06:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:18.312 07:06:51 -- common/autotest_common.sh@10 -- # set +x 00:06:18.312 ************************************ 00:06:18.312 START TEST unittest_nvmf 00:06:18.312 ************************************ 00:06:18.312 07:06:51 -- common/autotest_common.sh@1102 -- # unittest_nvmf 00:06:18.312 07:06:51 -- unit/unittest.sh@106 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:06:18.312 00:06:18.312 00:06:18.312 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.312 http://cunit.sourceforge.net/ 00:06:18.312 00:06:18.312 00:06:18.312 Suite: nvmf 00:06:18.312 Test: test_get_log_page ...passed 00:06:18.312 Test: test_process_fabrics_cmd ...passed 00:06:18.312 Test: test_connect ...[2024-02-13 07:06:51.773014] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2575:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:06:18.312 [2024-02-13 07:06:51.773929] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 961:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:06:18.312 [2024-02-13 07:06:51.774023] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 795:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:06:18.312 [2024-02-13 07:06:51.774067] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1008:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:06:18.312 [2024-02-13 07:06:51.774092] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 742:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:06:18.312 [2024-02-13 07:06:51.774193] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:06:18.312 [2024-02-13 07:06:51.774257] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 813:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:06:18.312 [2024-02-13 
07:06:51.774354] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 819:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:06:18.312 [2024-02-13 07:06:51.774383] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 846:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 00:06:18.312 [2024-02-13 07:06:51.774481] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:06:18.312 [2024-02-13 07:06:51.774546] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 599:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:06:18.312 [2024-02-13 07:06:51.774848] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 605:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:06:18.312 [2024-02-13 07:06:51.774921] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 611:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:06:18.312 [2024-02-13 07:06:51.775027] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 618:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:06:18.312 [2024-02-13 07:06:51.775105] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 641:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:06:18.312 [2024-02-13 07:06:51.775194] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 242:ctrlr_add_qpair_and_send_rsp: *ERROR*: Got I/O connect with duplicate QID 1 00:06:18.312 [2024-02-13 07:06:51.775307] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 726:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 2, group (nil)) 00:06:18.313 [2024-02-13 07:06:51.775380] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 726:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil)) 00:06:18.313 passed 00:06:18.313 Test: test_get_ns_id_desc_list ...passed 00:06:18.313 Test: test_identify_ns ...[2024-02-13 07:06:51.775575] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2669:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.313 [2024-02-13 07:06:51.775787] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2669:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:06:18.313 passed 00:06:18.313 Test: test_identify_ns_iocs_specific ...[2024-02-13 07:06:51.775899] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2669:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:06:18.313 [2024-02-13 07:06:51.776012] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2669:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.313 passed 00:06:18.313 Test: test_reservation_write_exclusive ...passed 00:06:18.313 Test: test_reservation_exclusive_access ...passed 00:06:18.313 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...[2024-02-13 07:06:51.776296] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2669:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:06:18.313 passed 00:06:18.313 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:06:18.313 Test: test_reservation_notification_log_page ...passed 00:06:18.313 Test: test_get_dif_ctx ...passed 00:06:18.313 Test: test_set_get_features ...[2024-02-13 07:06:51.776798] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1605:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:06:18.313 [2024-02-13 07:06:51.776852] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1605:temp_threshold_opts_valid: *ERROR*: 
Invalid TMPSEL 9 00:06:18.313 [2024-02-13 07:06:51.776895] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1616:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:06:18.313 passed 00:06:18.313 Test: test_identify_ctrlr ...passed 00:06:18.313 Test: test_identify_ctrlr_iocs_specific ...[2024-02-13 07:06:51.776925] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1692:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:06:18.313 passed 00:06:18.313 Test: test_custom_admin_cmd ...passed 00:06:18.313 Test: test_fused_compare_and_write ...passed 00:06:18.313 Test: test_multi_async_event_reqs ...passed 00:06:18.313 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:06:18.313 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:06:18.313 Test: test_multi_async_events ...[2024-02-13 07:06:51.777395] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4176:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:06:18.313 [2024-02-13 07:06:51.777446] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4165:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:18.313 [2024-02-13 07:06:51.777494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4183:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:06:18.313 passed 00:06:18.313 Test: test_rae ...passed 00:06:18.313 Test: test_nvmf_ctrlr_create_destruct ...passed 00:06:18.313 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:06:18.313 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:06:18.313 Test: test_zcopy_read ...[2024-02-13 07:06:51.778028] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4303:nvmf_ctrlr_process_io_cmd: *ERROR*: I/O command sent before CONNECT 00:06:18.313 passed 00:06:18.313 Test: test_zcopy_write ...passed 00:06:18.313 Test: test_nvmf_property_set ...passed 00:06:18.313 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:06:18.313 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:06:18.313 00:06:18.313 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.313 suites 1 1 n/a 0 0 00:06:18.313 tests 30 30 30 0 0 00:06:18.313 asserts 889 889 889 0 n/a 00:06:18.313 00:06:18.313 Elapsed time = 0.005 seconds 00:06:18.313 [2024-02-13 07:06:51.778210] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1903:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:18.313 [2024-02-13 07:06:51.778279] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1903:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:06:18.313 [2024-02-13 07:06:51.778321] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1926:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:06:18.313 [2024-02-13 07:06:51.778344] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1932:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:06:18.313 [2024-02-13 07:06:51.778375] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1944:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:06:18.313 07:06:51 -- unit/unittest.sh@107 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:06:18.313 00:06:18.313 00:06:18.313 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.313 http://cunit.sourceforge.net/ 00:06:18.313 00:06:18.313 00:06:18.313 Suite: nvmf 00:06:18.313 Test: 
test_get_rw_params ...passed 00:06:18.313 Test: test_lba_in_range ...passed 00:06:18.313 Test: test_get_dif_ctx ...passed 00:06:18.313 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:06:18.313 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:06:18.313 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:06:18.313 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-02-13 07:06:51.816193] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 435:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:06:18.313 [2024-02-13 07:06:51.816484] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 443:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:06:18.313 [2024-02-13 07:06:51.816572] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 450:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:06:18.313 [2024-02-13 07:06:51.816616] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 946:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:06:18.313 [2024-02-13 07:06:51.816698] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 953:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:06:18.313 [2024-02-13 07:06:51.816790] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 389:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:06:18.313 [2024-02-13 07:06:51.816816] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 396:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:06:18.313 passed 00:06:18.313 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:06:18.313 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:06:18.313 00:06:18.313 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.313 suites 1 1 n/a 0 0 00:06:18.313 tests 9 9 9 0 0 00:06:18.313 asserts 157 157 157 0 n/a 00:06:18.313 00:06:18.313 Elapsed time = 0.001 seconds 00:06:18.313 [2024-02-13 07:06:51.816872] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 488:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:06:18.313 [2024-02-13 07:06:51.816903] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 495:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:06:18.313 07:06:51 -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:06:18.313 00:06:18.313 00:06:18.313 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.313 http://cunit.sourceforge.net/ 00:06:18.313 00:06:18.313 00:06:18.313 Suite: nvmf 00:06:18.313 Test: test_discovery_log ...passed 00:06:18.313 Test: test_discovery_log_with_filters ...passed 00:06:18.313 00:06:18.313 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.313 suites 1 1 n/a 0 0 00:06:18.313 tests 2 2 2 0 0 00:06:18.313 asserts 238 238 238 0 n/a 00:06:18.313 00:06:18.313 Elapsed time = 0.003 seconds 00:06:18.313 07:06:51 -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:06:18.313 00:06:18.313 00:06:18.313 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.313 http://cunit.sourceforge.net/ 00:06:18.313 00:06:18.313 00:06:18.313 Suite: nvmf 00:06:18.313 Test: nvmf_test_create_subsystem ...[2024-02-13 07:06:51.894409] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 
00:06:18.313 [2024-02-13 07:06:51.894840] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:06:18.313 [2024-02-13 07:06:51.894930] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:06:18.313 [2024-02-13 07:06:51.894965] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:06:18.313 [2024-02-13 07:06:51.894990] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:06:18.313 [2024-02-13 07:06:51.895023] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 00:06:18.313 [2024-02-13 07:06:51.895127] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:06:18.313 [2024-02-13 07:06:51.895323] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 
00:06:18.313 [2024-02-13 07:06:51.895426] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:06:18.313 [2024-02-13 07:06:51.895461] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:18.313 [2024-02-13 07:06:51.895486] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:06:18.313 passed 00:06:18.313 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:06:18.313 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:06:18.314 Test: test_reservation_register ...[2024-02-13 07:06:51.895697] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:06:18.314 [2024-02-13 07:06:51.895806] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1734:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:06:18.314 passed 00:06:18.314 Test: test_reservation_register_with_ptpl ...[2024-02-13 07:06:51.896049] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:18.314 [2024-02-13 07:06:51.896184] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2841:nvmf_ns_reservation_register: *ERROR*: No registrant 00:06:18.314 passed 00:06:18.314 Test: test_reservation_acquire_preempt_1 ...[2024-02-13 07:06:51.897285] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:18.314 passed 00:06:18.314 Test: test_reservation_acquire_release_with_ptpl ...passed 00:06:18.314 Test: test_reservation_release ...passed 00:06:18.314 Test: test_reservation_unregister_notification ...[2024-02-13 07:06:51.899946] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:18.314 [2024-02-13 07:06:51.900261] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:18.314 passed 00:06:18.314 Test: test_reservation_release_notification ...passed 00:06:18.314 Test: test_reservation_release_notification_write_exclusive ...[2024-02-13 07:06:51.900527] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:18.314 [2024-02-13 07:06:51.900774] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:18.314 passed 00:06:18.314 Test: test_reservation_clear_notification ...passed 00:06:18.314 Test: test_reservation_preempt_notification ...[2024-02-13 07:06:51.900988] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:18.314 [2024-02-13 07:06:51.901258] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2783:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:06:18.314 passed 00:06:18.314 Test: test_spdk_nvmf_ns_event ...passed 00:06:18.314 Test: 
test_nvmf_ns_reservation_add_remove_registrant ...passed 00:06:18.314 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:06:18.314 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:06:18.314 Test: test_nvmf_ns_reservation_report ...[2024-02-13 07:06:51.901961] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 260:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:06:18.314 [2024-02-13 07:06:51.902070] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to transport_ut transport 00:06:18.314 passed 00:06:18.314 Test: test_nvmf_nqn_is_valid ...passed 00:06:18.314 Test: test_nvmf_ns_reservation_restore ...passed 00:06:18.314 Test: test_nvmf_subsystem_state_change ...[2024-02-13 07:06:51.902210] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3146:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:06:18.314 [2024-02-13 07:06:51.902329] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:06:18.314 [2024-02-13 07:06:51.902365] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:abe3e4a8-357e-41f6-90e4-fef38336aaa": uuid is not the correct length 00:06:18.314 [2024-02-13 07:06:51.902406] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:06:18.314 [2024-02-13 07:06:51.902522] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2340:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:06:18.314 passed 00:06:18.314 Test: test_nvmf_reservation_custom_ops ...passed 00:06:18.314 00:06:18.314 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.314 suites 1 1 n/a 0 0 00:06:18.314 tests 22 22 22 0 0 00:06:18.314 asserts 405 405 405 0 n/a 00:06:18.314 00:06:18.314 Elapsed time = 0.009 seconds 00:06:18.314 07:06:51 -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:06:18.314 00:06:18.314 00:06:18.314 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.314 http://cunit.sourceforge.net/ 00:06:18.314 00:06:18.314 00:06:18.314 Suite: nvmf 00:06:18.314 Test: test_nvmf_tcp_create ...[2024-02-13 07:06:51.974092] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 730:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:06:18.314 passed 00:06:18.574 Test: test_nvmf_tcp_destroy ...passed 00:06:18.574 Test: test_nvmf_tcp_poll_group_create ...passed 00:06:18.574 Test: test_nvmf_tcp_send_c2h_data ...passed 00:06:18.574 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:06:18.574 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:06:18.574 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:06:18.574 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-02-13 07:06:52.076961] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.077050] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bae0 is same with the state(5) to be set 00:06:18.574 passed 00:06:18.574 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:06:18.574 Test: test_nvmf_tcp_icreq_handle 
...passed 00:06:18.574 Test: test_nvmf_tcp_check_xfer_type ...passed 00:06:18.574 Test: test_nvmf_tcp_invalid_sgl ...[2024-02-13 07:06:52.077144] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bae0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.077180] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.077203] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bae0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.077315] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:18.574 [2024-02-13 07:06:52.077406] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.077455] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bae0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.077477] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2089:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:06:18.574 [2024-02-13 07:06:52.077503] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bae0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.077524] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.077552] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bae0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.077581] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.077625] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bae0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.077713] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2484:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:06:18.574 passed 00:06:18.574 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-02-13 07:06:52.077757] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.077778] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bae0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.077832] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2216:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffec319c840 00:06:18.574 [2024-02-13 07:06:52.077911] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.077960] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.077994] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2273:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffec319bfa0 00:06:18.574 [2024-02-13 07:06:52.078018] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.078045] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.078071] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2226:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:06:18.574 [2024-02-13 07:06:52.078109] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.078145] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.078187] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2265:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:06:18.574 [2024-02-13 07:06:52.078221] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.078275] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.574 passed 00:06:18.574 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-02-13 07:06:52.078302] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.078329] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.078381] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.574 [2024-02-13 07:06:52.078416] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.574 [2024-02-13 07:06:52.078479] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.575 [2024-02-13 07:06:52.078500] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.575 [2024-02-13 07:06:52.078537] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.575 [2024-02-13 07:06:52.078564] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.575 [2024-02-13 07:06:52.078623] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.575 [2024-02-13 07:06:52.078645] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.575 [2024-02-13 
07:06:52.078687] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1070:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2 00:06:18.575 [2024-02-13 07:06:52.078708] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffec319bfa0 is same with the state(5) to be set 00:06:18.575 passed 00:06:18.575 Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-02-13 07:06:52.098676] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:06:18.575 [2024-02-13 07:06:52.098766] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 00:06:18.575 passed 00:06:18.575 Test: test_nvmf_tcp_tls_generate_retained_psk ...passed 00:06:18.575 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed[2024-02-13 07:06:52.098988] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:06:18.575 [2024-02-13 07:06:52.099016] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:06:18.575 [2024-02-13 07:06:52.099155] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:06:18.575 [2024-02-13 07:06:52.099181] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:06:18.575 00:06:18.575 00:06:18.575 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.575 suites 1 1 n/a 0 0 00:06:18.575 tests 17 17 17 0 0 00:06:18.575 asserts 222 222 222 0 n/a 00:06:18.575 00:06:18.575 Elapsed time = 0.151 seconds 00:06:18.575 07:06:52 -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:06:18.575 00:06:18.575 00:06:18.575 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.575 http://cunit.sourceforge.net/ 00:06:18.575 00:06:18.575 00:06:18.575 Suite: nvmf 00:06:18.575 Test: test_nvmf_tgt_create_poll_group ...passed 00:06:18.575 00:06:18.575 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.575 suites 1 1 n/a 0 0 00:06:18.575 tests 1 1 1 0 0 00:06:18.575 asserts 17 17 17 0 n/a 00:06:18.575 00:06:18.575 Elapsed time = 0.021 seconds 00:06:18.575 00:06:18.575 real 0m0.495s 00:06:18.575 user 0m0.222s 00:06:18.575 sys 0m0.274s 00:06:18.575 ************************************ 00:06:18.575 END TEST unittest_nvmf 00:06:18.575 ************************************ 00:06:18.575 07:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.575 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 07:06:52 -- unit/unittest.sh@260 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:18.834 07:06:52 -- unit/unittest.sh@265 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:18.834 07:06:52 -- unit/unittest.sh@266 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:18.834 07:06:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:18.834 07:06:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:18.834 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 ************************************ 00:06:18.834 START TEST 
unittest_nvmf_rdma 00:06:18.834 ************************************ 00:06:18.834 07:06:52 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:06:18.834 00:06:18.834 00:06:18.834 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.834 http://cunit.sourceforge.net/ 00:06:18.834 00:06:18.834 00:06:18.834 Suite: nvmf 00:06:18.834 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-02-13 07:06:52.333147] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1916:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:06:18.834 [2024-02-13 07:06:52.334455] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:06:18.834 [2024-02-13 07:06:52.334866] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1966:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:06:18.834 passed 00:06:18.834 Test: test_spdk_nvmf_rdma_request_process ...passed 00:06:18.834 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:06:18.834 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:06:18.834 Test: test_nvmf_rdma_opts_init ...passed 00:06:18.834 Test: test_nvmf_rdma_request_free_data ...passed 00:06:18.834 Test: test_nvmf_rdma_update_ibv_state ...[2024-02-13 07:06:52.337236] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 614:nvmf_rdma_update_ibv_state: *ERROR*: Failed to get updated RDMA queue pair state! 00:06:18.834 [2024-02-13 07:06:52.337677] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 625:nvmf_rdma_update_ibv_state: *ERROR*: QP#0: bad state updated: 10, maybe hardware issue 00:06:18.834 passed 00:06:18.834 Test: test_nvmf_rdma_resources_create ...passed 00:06:18.834 Test: test_nvmf_rdma_qpair_compare ...passed 00:06:18.834 Test: test_nvmf_rdma_resize_cq ...[2024-02-13 07:06:52.339946] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1006:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. 
Current capacity 20, required 0 00:06:18.834 Using CQ of insufficient size may lead to CQ overrun 00:06:18.834 [2024-02-13 07:06:52.340381] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1011:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:06:18.834 [2024-02-13 07:06:52.340799] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1019:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory 00:06:18.834 passed 00:06:18.834 00:06:18.834 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.834 suites 1 1 n/a 0 0 00:06:18.834 tests 10 10 10 0 0 00:06:18.834 asserts 584 584 584 0 n/a 00:06:18.834 00:06:18.834 Elapsed time = 0.005 seconds 00:06:18.834 00:06:18.834 real 0m0.048s 00:06:18.834 user 0m0.017s 00:06:18.834 sys 0m0.029s 00:06:18.834 ************************************ 00:06:18.834 END TEST unittest_nvmf_rdma 00:06:18.834 ************************************ 00:06:18.834 07:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.834 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 07:06:52 -- unit/unittest.sh@269 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:18.834 07:06:52 -- unit/unittest.sh@273 -- # run_test unittest_scsi unittest_scsi 00:06:18.834 07:06:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:18.834 07:06:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:18.834 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:18.834 ************************************ 00:06:18.834 START TEST unittest_scsi 00:06:18.834 ************************************ 00:06:18.834 07:06:52 -- common/autotest_common.sh@1102 -- # unittest_scsi 00:06:18.834 07:06:52 -- unit/unittest.sh@115 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:06:18.834 00:06:18.834 00:06:18.834 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.834 http://cunit.sourceforge.net/ 00:06:18.834 00:06:18.834 00:06:18.834 Suite: dev_suite 00:06:18.834 Test: dev_destruct_null_dev ...passed 00:06:18.834 Test: dev_destruct_zero_luns ...passed 00:06:18.834 Test: dev_destruct_null_lun ...passed 00:06:18.834 Test: dev_destruct_success ...passed 00:06:18.834 Test: dev_construct_num_luns_zero ...[2024-02-13 07:06:52.433805] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:06:18.834 passed 00:06:18.834 Test: dev_construct_no_lun_zero ...passed 00:06:18.834 Test: dev_construct_null_lun ...passed 00:06:18.834 Test: dev_construct_name_too_long ...[2024-02-13 07:06:52.434332] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:06:18.834 [2024-02-13 07:06:52.434419] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:06:18.834 [2024-02-13 07:06:52.434480] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:06:18.834 passed 00:06:18.834 Test: dev_construct_success ...passed 00:06:18.834 Test: dev_construct_success_lun_zero_not_first ...passed 00:06:18.834 Test: 
dev_queue_mgmt_task_success ...passed 00:06:18.834 Test: dev_queue_task_success ...passed 00:06:18.834 Test: dev_stop_success ...passed 00:06:18.834 Test: dev_add_port_max_ports ...passed 00:06:18.834 Test: dev_add_port_construct_failure1 ...[2024-02-13 07:06:52.434877] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:06:18.834 passed 00:06:18.834 Test: dev_add_port_construct_failure2 ...passed 00:06:18.834 Test: dev_add_port_success1 ...passed 00:06:18.834 Test: dev_add_port_success2 ...passed 00:06:18.834 Test: dev_add_port_success3 ...passed 00:06:18.834 Test: dev_find_port_by_id_num_ports_zero ...passed 00:06:18.834 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:06:18.835 Test: dev_find_port_by_id_success ...passed 00:06:18.835 Test: dev_add_lun_bdev_not_found ...passed 00:06:18.835 Test: dev_add_lun_no_free_lun_id ...[2024-02-13 07:06:52.434961] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:06:18.835 [2024-02-13 07:06:52.435033] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:06:18.835 passed 00:06:18.835 Test: dev_add_lun_success1 ...passed 00:06:18.835 Test: dev_add_lun_success2 ...passed 00:06:18.835 Test: dev_check_pending_tasks ...passed 00:06:18.835 Test: dev_iterate_luns ...passed 00:06:18.835 Test: dev_find_free_lun ...[2024-02-13 07:06:52.435398] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:06:18.835 passed 00:06:18.835 00:06:18.835 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.835 suites 1 1 n/a 0 0 00:06:18.835 tests 29 29 29 0 0 00:06:18.835 asserts 97 97 97 0 n/a 00:06:18.835 00:06:18.835 Elapsed time = 0.002 seconds 00:06:18.835 07:06:52 -- unit/unittest.sh@116 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:06:18.835 00:06:18.835 00:06:18.835 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.835 http://cunit.sourceforge.net/ 00:06:18.835 00:06:18.835 00:06:18.835 Suite: lun_suite 00:06:18.835 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:06:18.835 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-02-13 07:06:52.472307] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:06:18.835 [2024-02-13 07:06:52.472678] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:06:18.835 passed 00:06:18.835 Test: lun_task_mgmt_execute_lun_reset ...passed 00:06:18.835 Test: lun_task_mgmt_execute_target_reset ...passed 00:06:18.835 Test: lun_task_mgmt_execute_invalid_case ...passed 00:06:18.835 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:06:18.835 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:06:18.835 Test: lun_append_task_null_lun_not_supported ...passed 00:06:18.835 Test: lun_execute_scsi_task_pending ...passed 00:06:18.835 Test: lun_execute_scsi_task_complete ...passed 00:06:18.835 Test: lun_execute_scsi_task_resize ...passed 00:06:18.835 Test: lun_destruct_success ...passed 00:06:18.835 Test: lun_construct_null_ctx ...passed 00:06:18.835 Test: lun_construct_success ...passed[2024-02-13 07:06:52.472841] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:06:18.835 [2024-02-13 07:06:52.473051] 
/home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:06:18.835 00:06:18.835 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:06:18.835 Test: lun_reset_task_suspend_scsi_task ...passed 00:06:18.835 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:06:18.835 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:06:18.835 00:06:18.835 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.835 suites 1 1 n/a 0 0 00:06:18.835 tests 18 18 18 0 0 00:06:18.835 asserts 153 153 153 0 n/a 00:06:18.835 00:06:18.835 Elapsed time = 0.001 seconds 00:06:18.835 07:06:52 -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:06:18.835 00:06:18.835 00:06:18.835 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.835 http://cunit.sourceforge.net/ 00:06:18.835 00:06:18.835 00:06:18.835 Suite: scsi_suite 00:06:18.835 Test: scsi_init ...passed 00:06:18.835 00:06:18.835 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.835 suites 1 1 n/a 0 0 00:06:18.835 tests 1 1 1 0 0 00:06:18.835 asserts 1 1 1 0 n/a 00:06:18.835 00:06:18.835 Elapsed time = 0.000 seconds 00:06:19.094 07:06:52 -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:06:19.094 00:06:19.094 00:06:19.094 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.094 http://cunit.sourceforge.net/ 00:06:19.094 00:06:19.094 00:06:19.094 Suite: translation_suite 00:06:19.094 Test: mode_select_6_test ...passed 00:06:19.094 Test: mode_select_6_test2 ...passed 00:06:19.094 Test: mode_sense_6_test ...passed 00:06:19.094 Test: mode_sense_10_test ...passed 00:06:19.094 Test: inquiry_evpd_test ...passed 00:06:19.094 Test: inquiry_standard_test ...passed 00:06:19.094 Test: inquiry_overflow_test ...passed 00:06:19.094 Test: task_complete_test ...passed 00:06:19.094 Test: lba_range_test ...passed 00:06:19.094 Test: xfer_len_test ...passed 00:06:19.095 Test: xfer_test ...[2024-02-13 07:06:52.542794] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:06:19.095 passed 00:06:19.095 Test: scsi_name_padding_test ...passed 00:06:19.095 Test: get_dif_ctx_test ...passed 00:06:19.095 Test: unmap_split_test ...passed 00:06:19.095 00:06:19.095 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.095 suites 1 1 n/a 0 0 00:06:19.095 tests 14 14 14 0 0 00:06:19.095 asserts 1204 1204 1204 0 n/a 00:06:19.095 00:06:19.095 Elapsed time = 0.004 seconds 00:06:19.095 07:06:52 -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:06:19.095 00:06:19.095 00:06:19.095 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.095 http://cunit.sourceforge.net/ 00:06:19.095 00:06:19.095 00:06:19.095 Suite: reservation_suite 00:06:19.095 Test: test_reservation_register ...passed 00:06:19.095 Test: test_reservation_reserve ...[2024-02-13 07:06:52.572696] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:19.095 [2024-02-13 07:06:52.573119] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:19.095 passed 00:06:19.095 Test: test_reservation_preempt_non_all_regs ...passed 00:06:19.095 Test: test_reservation_preempt_all_regs ...[2024-02-13 07:06:52.573209] 
/home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 209:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:06:19.095 [2024-02-13 07:06:52.573319] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 204:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:06:19.095 [2024-02-13 07:06:52.573381] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:19.095 [2024-02-13 07:06:52.573455] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 458:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:06:19.095 [2024-02-13 07:06:52.573589] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:19.095 passed 00:06:19.095 Test: test_reservation_cmds_conflict ...passed 00:06:19.095 Test: test_scsi2_reserve_release ...passed 00:06:19.095 Test: test_pr_with_scsi2_reserve_release ...passed 00:06:19.095 00:06:19.095 [2024-02-13 07:06:52.573730] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:19.095 [2024-02-13 07:06:52.573784] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:06:19.095 [2024-02-13 07:06:52.573825] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:19.095 [2024-02-13 07:06:52.573847] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:19.095 [2024-02-13 07:06:52.573878] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:06:19.095 [2024-02-13 07:06:52.573900] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 845:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:06:19.095 [2024-02-13 07:06:52.573984] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 272:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:06:19.095 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.095 suites 1 1 n/a 0 0 00:06:19.095 tests 7 7 7 0 0 00:06:19.095 asserts 257 257 257 0 n/a 00:06:19.095 00:06:19.095 Elapsed time = 0.001 seconds 00:06:19.095 00:06:19.095 real 0m0.176s 00:06:19.095 user 0m0.111s 00:06:19.095 sys 0m0.067s 00:06:19.095 ************************************ 00:06:19.095 END TEST unittest_scsi 00:06:19.095 ************************************ 00:06:19.095 07:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.095 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:19.095 07:06:52 -- unit/unittest.sh@276 -- # uname -s 00:06:19.095 07:06:52 -- unit/unittest.sh@276 -- # '[' Linux = Linux ']' 00:06:19.095 07:06:52 -- unit/unittest.sh@277 -- # run_test unittest_sock unittest_sock 00:06:19.095 07:06:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:19.095 07:06:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:19.095 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:19.095 ************************************ 00:06:19.095 START TEST unittest_sock 00:06:19.095 ************************************ 00:06:19.095 07:06:52 -- common/autotest_common.sh@1102 -- # unittest_sock 00:06:19.095 07:06:52 -- unit/unittest.sh@123 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut 00:06:19.095 00:06:19.095 00:06:19.095 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.095 http://cunit.sourceforge.net/ 00:06:19.095 00:06:19.095 00:06:19.095 Suite: sock 00:06:19.095 Test: posix_sock ...passed 00:06:19.095 Test: ut_sock ...passed 00:06:19.095 Test: posix_sock_group ...passed 00:06:19.095 Test: ut_sock_group ...passed 00:06:19.095 Test: posix_sock_group_fairness ...passed 00:06:19.095 Test: _posix_sock_close ...passed 00:06:19.095 Test: sock_get_default_opts ...passed 00:06:19.095 Test: ut_sock_impl_get_set_opts ...passed 00:06:19.095 Test: posix_sock_impl_get_set_opts ...passed 00:06:19.095 Test: ut_sock_map ...passed 00:06:19.095 Test: override_impl_opts ...passed 00:06:19.095 Test: ut_sock_group_get_ctx ...passed 00:06:19.095 00:06:19.095 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.095 suites 1 1 n/a 0 0 00:06:19.095 tests 12 12 12 0 0 00:06:19.095 asserts 349 349 349 0 n/a 00:06:19.095 00:06:19.095 Elapsed time = 0.007 seconds 00:06:19.095 07:06:52 -- unit/unittest.sh@124 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut 00:06:19.095 00:06:19.095 00:06:19.095 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.095 http://cunit.sourceforge.net/ 00:06:19.095 00:06:19.095 00:06:19.095 Suite: posix 00:06:19.095 Test: flush ...passed 00:06:19.095 00:06:19.095 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.095 suites 1 1 n/a 0 0 00:06:19.095 tests 1 1 1 0 0 00:06:19.095 asserts 28 28 28 0 n/a 00:06:19.095 00:06:19.095 Elapsed time = 0.000 seconds 00:06:19.095 07:06:52 -- unit/unittest.sh@126 -- # grep -q '#define SPDK_CONFIG_URING 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:19.095 00:06:19.095 real 0m0.097s 00:06:19.095 user 0m0.026s 00:06:19.095 sys 0m0.047s 00:06:19.095 ************************************ 00:06:19.095 END TEST unittest_sock 00:06:19.095 ************************************ 00:06:19.095 07:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.095 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 07:06:52 -- unit/unittest.sh@279 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:19.354 07:06:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:19.354 07:06:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:19.354 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:19.354 ************************************ 00:06:19.354 START TEST unittest_thread 00:06:19.354 ************************************ 00:06:19.354 07:06:52 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:06:19.354 00:06:19.354 00:06:19.354 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.354 http://cunit.sourceforge.net/ 00:06:19.354 00:06:19.354 00:06:19.354 Suite: io_channel 00:06:19.354 Test: thread_alloc ...passed 00:06:19.354 Test: thread_send_msg ...passed 00:06:19.354 Test: thread_poller ...passed 00:06:19.354 Test: poller_pause ...passed 00:06:19.354 Test: thread_for_each ...passed 00:06:19.354 Test: for_each_channel_remove ...passed 00:06:19.355 Test: for_each_channel_unreg ...[2024-02-13 07:06:52.835264] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2163:spdk_io_device_register: *ERROR*: io_device 0x7ffed8702190 already registered (old:0x613000000200 new:0x6130000003c0) 00:06:19.355 passed 00:06:19.355 Test: thread_name ...passed 
00:06:19.355 Test: channel ...[2024-02-13 07:06:52.839474] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2297:spdk_get_io_channel: *ERROR*: could not find io_device 0x55fe287a24a0 00:06:19.355 passed 00:06:19.355 Test: channel_destroy_races ...passed 00:06:19.355 Test: thread_exit_test ...[2024-02-13 07:06:52.844977] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 629:thread_exit: *ERROR*: thread 0x618000005c80 got timeout, and move it to the exited state forcefully 00:06:19.355 passed 00:06:19.355 Test: thread_update_stats_test ...passed 00:06:19.355 Test: nested_channel ...passed 00:06:19.355 Test: device_unregister_and_thread_exit_race ...passed 00:06:19.355 Test: cache_closest_timed_poller ...passed 00:06:19.355 Test: multi_timed_pollers_have_same_expiration ...passed 00:06:19.355 Test: io_device_lookup ...passed 00:06:19.355 Test: spdk_spin ...[2024-02-13 07:06:52.856122] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3061:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:19.355 [2024-02-13 07:06:52.856172] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffed8702180 00:06:19.355 [2024-02-13 07:06:52.856251] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3099:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:06:19.355 [2024-02-13 07:06:52.857957] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:06:19.355 [2024-02-13 07:06:52.858036] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffed8702180 00:06:19.355 [2024-02-13 07:06:52.858061] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:19.355 [2024-02-13 07:06:52.858089] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffed8702180 00:06:19.355 [2024-02-13 07:06:52.858112] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:06:19.355 [2024-02-13 07:06:52.858144] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffed8702180 00:06:19.355 [2024-02-13 07:06:52.858167] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3043:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:06:19.355 [2024-02-13 07:06:52.858207] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x7ffed8702180 00:06:19.355 passed 00:06:19.355 Test: for_each_channel_and_thread_exit_race ...passed 00:06:19.355 Test: for_each_thread_and_thread_exit_race ...passed 00:06:19.355 00:06:19.355 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.355 suites 1 1 n/a 0 0 00:06:19.355 tests 20 20 20 0 0 00:06:19.355 asserts 409 409 409 0 n/a 00:06:19.355 00:06:19.355 Elapsed time = 0.051 seconds 00:06:19.355 00:06:19.355 real 0m0.090s 00:06:19.355 user 0m0.062s 00:06:19.355 sys 0m0.027s 00:06:19.355 07:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.355 ************************************ 00:06:19.355 END TEST unittest_thread 00:06:19.355 ************************************ 00:06:19.355 07:06:52 -- 
common/autotest_common.sh@10 -- # set +x 00:06:19.355 07:06:52 -- unit/unittest.sh@280 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:19.355 07:06:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:19.355 07:06:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:19.355 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:19.355 ************************************ 00:06:19.355 START TEST unittest_iobuf 00:06:19.355 ************************************ 00:06:19.355 07:06:52 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:06:19.355 00:06:19.355 00:06:19.355 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.355 http://cunit.sourceforge.net/ 00:06:19.355 00:06:19.355 00:06:19.355 Suite: io_channel 00:06:19.355 Test: iobuf ...passed 00:06:19.355 Test: iobuf_cache ...[2024-02-13 07:06:52.963833] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:19.355 [2024-02-13 07:06:52.964132] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:19.355 [2024-02-13 07:06:52.964258] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 323:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf large buffer cache. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:06:19.355 [2024-02-13 07:06:52.964291] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 326:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:06:19.355 [2024-02-13 07:06:52.964353] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 311:spdk_iobuf_channel_init: *ERROR*: Failed to populate iobuf small buffer cache. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:06:19.355 [2024-02-13 07:06:52.964386] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 314:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:06:19.355 passed 00:06:19.355 00:06:19.355 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.355 suites 1 1 n/a 0 0 00:06:19.355 tests 2 2 2 0 0 00:06:19.355 asserts 107 107 107 0 n/a 00:06:19.355 00:06:19.355 Elapsed time = 0.006 seconds 00:06:19.355 00:06:19.355 real 0m0.042s 00:06:19.355 user 0m0.029s 00:06:19.355 sys 0m0.013s 00:06:19.355 ************************************ 00:06:19.355 END TEST unittest_iobuf 00:06:19.355 ************************************ 00:06:19.355 07:06:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:19.355 07:06:52 -- common/autotest_common.sh@10 -- # set +x 00:06:19.355 07:06:53 -- unit/unittest.sh@281 -- # run_test unittest_util unittest_util 00:06:19.355 07:06:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:19.355 07:06:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:19.355 07:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:19.355 ************************************ 00:06:19.355 START TEST unittest_util 00:06:19.355 ************************************ 00:06:19.355 07:06:53 -- common/autotest_common.sh@1102 -- # unittest_util 00:06:19.355 07:06:53 -- unit/unittest.sh@132 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:06:19.615 00:06:19.615 00:06:19.615 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.615 http://cunit.sourceforge.net/ 00:06:19.615 00:06:19.615 00:06:19.615 Suite: base64 00:06:19.615 Test: test_base64_get_encoded_strlen ...passed 00:06:19.615 Test: test_base64_get_decoded_len ...passed 00:06:19.615 Test: test_base64_encode ...passed 00:06:19.615 Test: test_base64_decode ...passed 00:06:19.615 Test: test_base64_urlsafe_encode ...passed 00:06:19.615 Test: test_base64_urlsafe_decode ...passed 00:06:19.615 00:06:19.615 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.615 suites 1 1 n/a 0 0 00:06:19.615 tests 6 6 6 0 0 00:06:19.615 asserts 112 112 112 0 n/a 00:06:19.615 00:06:19.615 Elapsed time = 0.000 seconds 00:06:19.615 07:06:53 -- unit/unittest.sh@133 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:06:19.615 00:06:19.615 00:06:19.615 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.615 http://cunit.sourceforge.net/ 00:06:19.615 00:06:19.615 00:06:19.615 Suite: bit_array 00:06:19.615 Test: test_1bit ...passed 00:06:19.615 Test: test_64bit ...passed 00:06:19.615 Test: test_find ...passed 00:06:19.615 Test: test_resize ...passed 00:06:19.615 Test: test_errors ...passed 00:06:19.615 Test: test_count ...passed 00:06:19.615 Test: test_mask_store_load ...passed 00:06:19.615 Test: test_mask_clear ...passed 00:06:19.615 00:06:19.615 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.615 suites 1 1 n/a 0 0 00:06:19.615 tests 8 8 8 0 0 00:06:19.615 asserts 5075 5075 5075 0 n/a 00:06:19.615 00:06:19.615 Elapsed time = 0.003 seconds 00:06:19.615 07:06:53 -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:06:19.615 00:06:19.615 00:06:19.615 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.615 http://cunit.sourceforge.net/ 00:06:19.615 00:06:19.615 00:06:19.615 Suite: cpuset 00:06:19.615 Test: test_cpuset ...passed 00:06:19.615 Test: test_cpuset_parse ...[2024-02-13 07:06:53.121564] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 239:parse_list: *ERROR*: Unexpected end of core list '[' 00:06:19.615 [2024-02-13 07:06:53.121969] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list 
'[]' failed on character ']' 00:06:19.615 [2024-02-13 07:06:53.122060] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:06:19.615 [2024-02-13 07:06:53.122147] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 219:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:06:19.615 [2024-02-13 07:06:53.122173] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:06:19.615 [2024-02-13 07:06:53.122207] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 241:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:06:19.615 [2024-02-13 07:06:53.122242] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 203:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:06:19.615 [2024-02-13 07:06:53.122298] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 198:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:06:19.615 passed 00:06:19.615 Test: test_cpuset_fmt ...passed 00:06:19.615 00:06:19.615 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.615 suites 1 1 n/a 0 0 00:06:19.615 tests 3 3 3 0 0 00:06:19.615 asserts 65 65 65 0 n/a 00:06:19.615 00:06:19.615 Elapsed time = 0.002 seconds 00:06:19.615 07:06:53 -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:06:19.615 00:06:19.615 00:06:19.615 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.615 http://cunit.sourceforge.net/ 00:06:19.615 00:06:19.615 00:06:19.615 Suite: crc16 00:06:19.615 Test: test_crc16_t10dif ...passed 00:06:19.615 Test: test_crc16_t10dif_seed ...passed 00:06:19.615 Test: test_crc16_t10dif_copy ...passed 00:06:19.615 00:06:19.615 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.615 suites 1 1 n/a 0 0 00:06:19.615 tests 3 3 3 0 0 00:06:19.615 asserts 5 5 5 0 n/a 00:06:19.615 00:06:19.615 Elapsed time = 0.000 seconds 00:06:19.615 07:06:53 -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:06:19.615 00:06:19.615 00:06:19.615 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.615 http://cunit.sourceforge.net/ 00:06:19.615 00:06:19.615 00:06:19.615 Suite: crc32_ieee 00:06:19.615 Test: test_crc32_ieee ...passed 00:06:19.615 00:06:19.615 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.615 suites 1 1 n/a 0 0 00:06:19.615 tests 1 1 1 0 0 00:06:19.615 asserts 1 1 1 0 n/a 00:06:19.615 00:06:19.615 Elapsed time = 0.000 seconds 00:06:19.615 07:06:53 -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:06:19.615 00:06:19.615 00:06:19.616 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.616 http://cunit.sourceforge.net/ 00:06:19.616 00:06:19.616 00:06:19.616 Suite: crc32c 00:06:19.616 Test: test_crc32c ...passed 00:06:19.616 Test: test_crc32c_nvme ...passed 00:06:19.616 00:06:19.616 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.616 suites 1 1 n/a 0 0 00:06:19.616 tests 2 2 2 0 0 00:06:19.616 asserts 16 16 16 0 n/a 00:06:19.616 00:06:19.616 Elapsed time = 0.001 seconds 00:06:19.616 07:06:53 -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:06:19.616 00:06:19.616 00:06:19.616 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.616 http://cunit.sourceforge.net/ 00:06:19.616 00:06:19.616 00:06:19.616 Suite: crc64 00:06:19.616 Test: test_crc64_nvme 
...passed 00:06:19.616 00:06:19.616 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.616 suites 1 1 n/a 0 0 00:06:19.616 tests 1 1 1 0 0 00:06:19.616 asserts 4 4 4 0 n/a 00:06:19.616 00:06:19.616 Elapsed time = 0.000 seconds 00:06:19.616 07:06:53 -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:06:19.616 00:06:19.616 00:06:19.616 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.616 http://cunit.sourceforge.net/ 00:06:19.616 00:06:19.616 00:06:19.616 Suite: string 00:06:19.616 Test: test_parse_ip_addr ...passed 00:06:19.616 Test: test_str_chomp ...passed 00:06:19.616 Test: test_parse_capacity ...passed 00:06:19.616 Test: test_sprintf_append_realloc ...passed 00:06:19.616 Test: test_strtol ...passed 00:06:19.616 Test: test_strtoll ...passed 00:06:19.616 Test: test_strarray ...passed 00:06:19.616 Test: test_strcpy_replace ...passed 00:06:19.616 00:06:19.616 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.616 suites 1 1 n/a 0 0 00:06:19.616 tests 8 8 8 0 0 00:06:19.616 asserts 161 161 161 0 n/a 00:06:19.616 00:06:19.616 Elapsed time = 0.001 seconds 00:06:19.616 07:06:53 -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:06:19.616 00:06:19.616 00:06:19.616 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.616 http://cunit.sourceforge.net/ 00:06:19.616 00:06:19.616 00:06:19.616 Suite: dif 00:06:19.877 Test: dif_generate_and_verify_test ...[2024-02-13 07:06:53.305585] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:19.877 [2024-02-13 07:06:53.306292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:19.877 [2024-02-13 07:06:53.306663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:06:19.877 [2024-02-13 07:06:53.306959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:19.877 [2024-02-13 07:06:53.307228] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:19.877 passed 00:06:19.877 Test: dif_disable_check_test ...[2024-02-13 07:06:53.307508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:06:19.877 [2024-02-13 07:06:53.308499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:19.877 [2024-02-13 07:06:53.308836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:19.877 [2024-02-13 07:06:53.309113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:06:19.877 passed 00:06:19.877 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-02-13 07:06:53.310137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:06:19.877 [2024-02-13 07:06:53.310446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:06:19.877 [2024-02-13 
07:06:53.310736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:06:19.878 [2024-02-13 07:06:53.311056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:06:19.878 [2024-02-13 07:06:53.311365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:19.878 [2024-02-13 07:06:53.311655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:19.878 [2024-02-13 07:06:53.311946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:19.878 [2024-02-13 07:06:53.312223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:06:19.878 [2024-02-13 07:06:53.312507] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:19.878 [2024-02-13 07:06:53.312805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:19.878 [2024-02-13 07:06:53.313129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:06:19.878 passed 00:06:19.878 Test: dif_apptag_mask_test ...[2024-02-13 07:06:53.313436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:19.878 passed 00:06:19.878 Test: dif_sec_512_md_0_error_test ...passed 00:06:19.878 Test: dif_sec_4096_md_0_error_test ...passed 00:06:19.878 Test: dif_sec_4100_md_128_error_test ...passed[2024-02-13 07:06:53.313721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:06:19.878 [2024-02-13 07:06:53.313895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:19.878 [2024-02-13 07:06:53.313920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:06:19.878 [2024-02-13 07:06:53.313948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
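
Context for the dif_ut output around here: the suite feeds deliberately corrupted T10 protection information through _dif_verify, so every *ERROR* line is an expected detection rather than a test failure (note the tests still report "passed"). Each protected block carries the standard 8-byte tuple of Guard, Application Tag and Reference Tag, and the dif_sec_*_md_0_error cases check that spdk_dif_ctx_init rejects a metadata region too small to hold that tuple ("Metadata size is smaller than DIF size"). A minimal standalone sketch of the tuple and the three comparisons, assuming the conventional T10 field layout (illustrative C, not SPDK's internal structures):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Conventional 8-byte T10 PI tuple kept in the block metadata:
     * 2-byte Guard (CRC of the data), 2-byte Application Tag,
     * 4-byte Reference Tag (typically the low 32 bits of the LBA). */
    struct pi_tuple {
            uint16_t guard;
            uint16_t app_tag;
            uint32_t ref_tag;
    };

    /* Mirrors the order of the checks reported in the log: Guard,
     * then App Tag, then Ref Tag. Returns 0 on match, -1 on mismatch. */
    static int
    pi_verify_sketch(const struct pi_tuple *got, const struct pi_tuple *want,
                     uint64_t lba)
    {
            if (got->guard != want->guard) {
                    fprintf(stderr, "Guard mismatch: LBA=%" PRIu64
                            ", Expected=%x, Actual=%x\n",
                            lba, want->guard, got->guard);
                    return -1;
            }
            if (got->app_tag != want->app_tag) {
                    fprintf(stderr, "App Tag mismatch: LBA=%" PRIu64 "\n", lba);
                    return -1;
            }
            if (got->ref_tag != want->ref_tag) {
                    fprintf(stderr, "Ref Tag mismatch: LBA=%" PRIu64 "\n", lba);
                    return -1;
            }
            return 0;
    }
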
00:06:19.878 [2024-02-13 07:06:53.314012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:19.878 [2024-02-13 07:06:53.314048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 497:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:06:19.878 00:06:19.878 Test: dif_guard_seed_test ...passed 00:06:19.878 Test: dif_guard_value_test ...passed 00:06:19.878 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:06:19.878 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:19.878 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:06:19.878 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:06:19.878 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:19.878 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:06:19.878 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:06:19.878 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:06:19.878 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:19.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-02-13 07:06:53.358705] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4d, Actual=fd4c 00:06:19.878 [2024-02-13 07:06:53.361159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fe20, Actual=fe21 00:06:19.878 [2024-02-13 07:06:53.363621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.366072] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.368537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:19.878 [2024-02-13 07:06:53.371005] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:19.878 [2024-02-13 07:06:53.373455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=4b33 00:06:19.878 [2024-02-13 07:06:53.375198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fe21, Actual=1707 00:06:19.878 [2024-02-13 07:06:53.376938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ec, 
Actual=1ab753ed 00:06:19.878 [2024-02-13 07:06:53.379388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=38574661, Actual=38574660 00:06:19.878 [2024-02-13 07:06:53.381835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.384254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.386720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:19.878 [2024-02-13 07:06:53.389156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:19.878 [2024-02-13 07:06:53.391591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=b1624369 00:06:19.878 [2024-02-13 07:06:53.393356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=38574660, Actual=7ecad326 00:06:19.878 [2024-02-13 07:06:53.395109] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:19.878 [2024-02-13 07:06:53.397547] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:06:19.878 [2024-02-13 07:06:53.399964] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.402400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.404826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c 00:06:19.878 [2024-02-13 07:06:53.407294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c 00:06:19.878 [2024-02-13 07:06:53.409765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:19.878 [2024-02-13 07:06:53.411512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=88010a2d4837a266, Actual=4290ab8be9a0304a 00:06:19.878 passed 00:06:19.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-02-13 07:06:53.412356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:06:19.878 [2024-02-13 07:06:53.412649] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:06:19.878 [2024-02-13 07:06:53.412919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.413209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.878 [2024-02-13 
07:06:53.413508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.878 [2024-02-13 07:06:53.413778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.878 [2024-02-13 07:06:53.414051] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=4b33 00:06:19.878 [2024-02-13 07:06:53.414304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1707 00:06:19.878 [2024-02-13 07:06:53.414554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:06:19.878 [2024-02-13 07:06:53.414835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 00:06:19.878 [2024-02-13 07:06:53.415124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.415414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.415690] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.878 [2024-02-13 07:06:53.415959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.878 [2024-02-13 07:06:53.416234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b1624369 00:06:19.878 [2024-02-13 07:06:53.416469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ecad326 00:06:19.878 [2024-02-13 07:06:53.416731] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:19.878 [2024-02-13 07:06:53.417003] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:06:19.878 [2024-02-13 07:06:53.417308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.417583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.878 [2024-02-13 07:06:53.417864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.878 [2024-02-13 07:06:53.418150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.878 [2024-02-13 07:06:53.418465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:19.878 passed 00:06:19.878 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-02-13 07:06:53.418726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4290ab8be9a0304a 00:06:19.878 [2024-02-13 07:06:53.419010] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:06:19.878 [2024-02-13 07:06:53.419292] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:06:19.879 [2024-02-13 07:06:53.419568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.419853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.420153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.420453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.420739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=4b33 00:06:19.879 [2024-02-13 07:06:53.420989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1707 00:06:19.879 [2024-02-13 07:06:53.421254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:06:19.879 [2024-02-13 07:06:53.421549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 00:06:19.879 [2024-02-13 07:06:53.421829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.422110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.422397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.422679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.422981] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b1624369 00:06:19.879 [2024-02-13 07:06:53.423249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ecad326 00:06:19.879 [2024-02-13 07:06:53.423518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:19.879 [2024-02-13 07:06:53.423793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:06:19.879 [2024-02-13 07:06:53.424076] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.424372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.424662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.879 [2024-02-13 07:06:53.424941] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.879 [2024-02-13 07:06:53.425258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:19.879 [2024-02-13 07:06:53.425508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4290ab8be9a0304a 00:06:19.879 passed 00:06:19.879 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-02-13 07:06:53.425794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:06:19.879 [2024-02-13 07:06:53.426089] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:06:19.879 [2024-02-13 07:06:53.426382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.426667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.426987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.427266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.427552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=4b33 00:06:19.879 [2024-02-13 07:06:53.427796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1707 00:06:19.879 [2024-02-13 07:06:53.428050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:06:19.879 [2024-02-13 07:06:53.428324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 00:06:19.879 [2024-02-13 07:06:53.428622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.428906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.429201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.429486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.429769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b1624369 00:06:19.879 [2024-02-13 07:06:53.430031] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ecad326 00:06:19.879 [2024-02-13 07:06:53.430308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:19.879 [2024-02-13 07:06:53.430594] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:06:19.879 [2024-02-13 07:06:53.430886] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.431171] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.431452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.879 [2024-02-13 07:06:53.431736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.879 [2024-02-13 07:06:53.432032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:19.879 passed 00:06:19.879 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-02-13 07:06:53.432287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4290ab8be9a0304a 00:06:19.879 [2024-02-13 07:06:53.432572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:06:19.879 [2024-02-13 07:06:53.432848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:06:19.879 [2024-02-13 07:06:53.433158] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.433459] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.433758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.434039] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.434337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=4b33 00:06:19.879 [2024-02-13 07:06:53.434606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1707 00:06:19.879 passed 00:06:19.879 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-02-13 07:06:53.434909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:06:19.879 [2024-02-13 07:06:53.435202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 
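
A pattern worth noting in the Expected/Actual pairs above: each injected corruption is a single flipped bit (fd4d vs fd4c, fe20 vs fe21, 88 vs 89, 58 vs 59, and even 58 vs 100000058 is one bit at position 32), so the inject_1_2_4_8 cases can attribute every detection to one precise fault. A one-line helper of the kind such a test might use (hypothetical name, shown only to make the pattern concrete):

    #include <stddef.h>
    #include <stdint.h>

    /* Flip a single bit before re-running verification; e.g. flipping
     * bit 0 of the low Guard byte turns 0xfd4c into 0xfd4d, which
     * _dif_verify then reports as a Guard mismatch. (Illustrative
     * helper, not SPDK's injection code.) */
    static inline void
    inject_single_bit_error(uint8_t *buf, size_t byte_off, unsigned int bit)
    {
            buf[byte_off] ^= (uint8_t)(1u << (bit & 7));
    }
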
00:06:19.879 [2024-02-13 07:06:53.435511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.435790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.436074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.436348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.436632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b1624369 00:06:19.879 [2024-02-13 07:06:53.436880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ecad326 00:06:19.879 [2024-02-13 07:06:53.437185] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:19.879 [2024-02-13 07:06:53.437476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:06:19.879 [2024-02-13 07:06:53.437755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.438038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.438332] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.879 [2024-02-13 07:06:53.438616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.879 [2024-02-13 07:06:53.438914] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:19.879 passed 00:06:19.879 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-02-13 07:06:53.439180] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4290ab8be9a0304a 00:06:19.879 [2024-02-13 07:06:53.439455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:06:19.879 [2024-02-13 07:06:53.439744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe20, Actual=fe21 00:06:19.879 [2024-02-13 07:06:53.440019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.440302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.879 [2024-02-13 07:06:53.440604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.879 [2024-02-13 07:06:53.440886] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.880 [2024-02-13 07:06:53.441182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=4b33 00:06:19.880 [2024-02-13 07:06:53.441429] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=1707 00:06:19.880 passed 00:06:19.880 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-02-13 07:06:53.441718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:06:19.880 [2024-02-13 07:06:53.442004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574661, Actual=38574660 00:06:19.880 [2024-02-13 07:06:53.442323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.442611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.442896] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.880 [2024-02-13 07:06:53.443188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.880 [2024-02-13 07:06:53.443475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b1624369 00:06:19.880 [2024-02-13 07:06:53.443720] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=7ecad326 00:06:19.880 [2024-02-13 07:06:53.444027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:19.880 [2024-02-13 07:06:53.444309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a267, Actual=88010a2d4837a266 00:06:19.880 [2024-02-13 07:06:53.444596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.444878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.445183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.880 [2024-02-13 07:06:53.445465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.880 [2024-02-13 07:06:53.445769] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:19.880 [2024-02-13 07:06:53.446025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=4290ab8be9a0304a 00:06:19.880 passed 00:06:19.880 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 
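
The Guard values compared throughout this suite are CRCs of the block data; the 16-, 32- and 64-bit wide Expected/Actual values (fd4c, 1ab753ed, a576a7728ecc20d3) suggest the matrix runs each case across the supported guard widths. The classic T10-DIF guard is a 16-bit CRC with polynomial 0x8BB7, the same primitive the crc16 suite exercised earlier via test_crc16_t10dif. A minimal bit-at-a-time version for reference; production implementations, SPDK's included, use table-driven or instruction-accelerated variants:

    #include <stddef.h>
    #include <stdint.h>

    /* CRC16 with the T10-DIF polynomial 0x8BB7: MSB-first, not
     * reflected, no final XOR. crc starts at 0, or at a caller-supplied
     * seed (cf. test_crc16_t10dif_seed). */
    static uint16_t
    crc16_t10dif_sketch(uint16_t crc, const void *buf, size_t len)
    {
            const uint8_t *p = buf;

            for (size_t i = 0; i < len; i++) {
                    crc ^= (uint16_t)((uint16_t)p[i] << 8);
                    for (int bit = 0; bit < 8; bit++) {
                            if (crc & 0x8000) {
                                    crc = (uint16_t)((crc << 1) ^ 0x8BB7);
                            } else {
                                    crc = (uint16_t)(crc << 1);
                            }
                    }
            }
            return crc;
    }
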
00:06:19.880 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:19.880 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:19.880 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:19.880 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:19.880 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:19.880 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:19.880 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:19.880 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:19.880 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-02-13 07:06:53.490261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4d, Actual=fd4c 00:06:19.880 [2024-02-13 07:06:53.491393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=f3dc, Actual=f3dd 00:06:19.880 [2024-02-13 07:06:53.492488] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.493591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.494703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:19.880 [2024-02-13 07:06:53.495792] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:19.880 [2024-02-13 07:06:53.496889] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=4b33 00:06:19.880 [2024-02-13 07:06:53.497985] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=a7b1 00:06:19.880 [2024-02-13 07:06:53.499102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ec, Actual=1ab753ed 00:06:19.880 [2024-02-13 07:06:53.500207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=5715212e, Actual=5715212f 00:06:19.880 [2024-02-13 07:06:53.501310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.502445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.503537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:19.880 [2024-02-13 07:06:53.504628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:19.880 [2024-02-13 07:06:53.505721] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=b1624369 00:06:19.880 [2024-02-13 07:06:53.506833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=2b267559, Actual=6dbbe01f 00:06:19.880 [2024-02-13 07:06:53.507916] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:19.880 [2024-02-13 07:06:53.509027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=b123dba2f5d4cfec, Actual=b123dba2f5d4cfed 00:06:19.880 [2024-02-13 07:06:53.510133] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.511240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.512321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c 00:06:19.880 [2024-02-13 07:06:53.513434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c 00:06:19.880 [2024-02-13 07:06:53.514551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:19.880 [2024-02-13 07:06:53.515683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=715842ed1a226d1 00:06:19.880 passed 00:06:19.880 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-02-13 07:06:53.516019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:06:19.880 [2024-02-13 07:06:53.516274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=905d, Actual=905c 00:06:19.880 [2024-02-13 07:06:53.516523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.516767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.517041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.880 [2024-02-13 07:06:53.517338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.880 [2024-02-13 07:06:53.517588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=4b33 00:06:19.880 [2024-02-13 07:06:53.517839] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=c430 00:06:19.880 [2024-02-13 07:06:53.518081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:06:19.880 [2024-02-13 07:06:53.518354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=969514db, Actual=969514da 00:06:19.880 [2024-02-13 07:06:53.518617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.518869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.519121] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.880 [2024-02-13 07:06:53.519376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:19.880 [2024-02-13 07:06:53.519617] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b1624369 00:06:19.880 [2024-02-13 07:06:53.519874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ac3bd5ea 00:06:19.880 [2024-02-13 07:06:53.520141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:19.880 [2024-02-13 07:06:53.520381] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=44c1d431d33b4bb3, Actual=44c1d431d33b4bb2 00:06:19.880 [2024-02-13 07:06:53.520641] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.520882] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:19.880 [2024-02-13 07:06:53.521151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.880 [2024-02-13 07:06:53.521395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:19.880 [2024-02-13 07:06:53.521656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:19.880 passed 00:06:19.880 Test: dix_sec_512_md_0_error ...passed 00:06:19.880 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-02-13 07:06:53.521918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=f2f78bbdf74da28e 00:06:19.880 [2024-02-13 07:06:53.521977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 479:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
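
From here the matrix repeats under the dix_* prefix. In T10 terms, DIF interleaves the 8-byte PI into extended blocks, while DIX keeps it in a separate metadata buffer; the _512_md_8 and _4096_md_128 test-name suffixes appear to encode the data block size and metadata size, and the _md_0_error cases confirm that spdk_dif_ctx_init rejects a metadata size of 0 because it cannot hold the PI tuple. A small sketch of that layout arithmetic (illustrative, with an assumed PI_SIZE constant):

    #include <stdbool.h>
    #include <stdint.h>

    #define PI_SIZE 8u   /* bytes of protection information per block */

    /* "512_md_8"    -> data_size=512,  md_size=8   (extended block: 520)
     * "4096_md_128" -> data_size=4096, md_size=128 (extended block: 4224) */
    static inline uint32_t
    extended_block_size(uint32_t data_size, uint32_t md_size)
    {
            return data_size + md_size;
    }

    /* The *_md_0_error cases: a metadata area smaller than the PI tuple
     * cannot be valid, hence "Metadata size is smaller than DIF size". */
    static inline bool
    md_size_ok(uint32_t md_size)
    {
            return md_size >= PI_SIZE;
    }
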
00:06:19.880 passed 00:06:19.880 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:06:19.880 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:06:19.880 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:06:19.880 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:06:19.880 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:06:19.881 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:06:19.881 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:06:20.140 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:06:20.141 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-02-13 07:06:53.565950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4d, Actual=fd4c 00:06:20.141 [2024-02-13 07:06:53.567113] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=f3dc, Actual=f3dd 00:06:20.141 [2024-02-13 07:06:53.568229] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.569336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.570475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:20.141 [2024-02-13 07:06:53.571588] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:20.141 [2024-02-13 07:06:53.572682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=fd4c, Actual=4b33 00:06:20.141 [2024-02-13 07:06:53.573797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=4e97, Actual=a7b1 00:06:20.141 [2024-02-13 07:06:53.574979] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ec, Actual=1ab753ed 00:06:20.141 [2024-02-13 07:06:53.576155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=5715212e, Actual=5715212f 00:06:20.141 [2024-02-13 07:06:53.577302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.578399] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.579480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:20.141 [2024-02-13 07:06:53.580572] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=5d 00:06:20.141 [2024-02-13 07:06:53.581675] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=1ab753ed, Actual=b1624369 00:06:20.141 [2024-02-13 07:06:53.582768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=2b267559, Actual=6dbbe01f 00:06:20.141 [2024-02-13 07:06:53.583881] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:20.141 [2024-02-13 07:06:53.584967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=b123dba2f5d4cfec, Actual=b123dba2f5d4cfed 00:06:20.141 [2024-02-13 07:06:53.586070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.587156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.588247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c 00:06:20.141 [2024-02-13 07:06:53.589333] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c 00:06:20.141 [2024-02-13 07:06:53.590442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:20.141 passed 00:06:20.141 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-02-13 07:06:53.591519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92, Expected=cd8425887035b4fd, Actual=715842ed1a226d1 00:06:20.141 [2024-02-13 07:06:53.591893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4d, Actual=fd4c 00:06:20.141 [2024-02-13 07:06:53.592145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=905d, Actual=905c 00:06:20.141 [2024-02-13 07:06:53.592392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.592656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.592921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:20.141 [2024-02-13 07:06:53.593188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:20.141 [2024-02-13 07:06:53.593437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=4b33 00:06:20.141 [2024-02-13 07:06:53.593677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=c430 00:06:20.141 [2024-02-13 07:06:53.593932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ec, Actual=1ab753ed 00:06:20.141 [2024-02-13 07:06:53.594177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=969514db, Actual=969514da 00:06:20.141 [2024-02-13 07:06:53.594452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.594711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: 
Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.594950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:20.141 [2024-02-13 07:06:53.595195] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=59 00:06:20.141 [2024-02-13 07:06:53.595436] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=b1624369 00:06:20.141 [2024-02-13 07:06:53.595685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=ac3bd5ea 00:06:20.141 [2024-02-13 07:06:53.595938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d2, Actual=a576a7728ecc20d3 00:06:20.141 [2024-02-13 07:06:53.596192] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=44c1d431d33b4bb3, Actual=44c1d431d33b4bb2 00:06:20.141 [2024-02-13 07:06:53.596435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.596680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=89 00:06:20.141 [2024-02-13 07:06:53.596919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:20.141 [2024-02-13 07:06:53.597181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058 00:06:20.141 [2024-02-13 07:06:53.597438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=8b333c5e42c7c4e 00:06:20.141 passed 00:06:20.141 Test: set_md_interleave_iovs_test ...[2024-02-13 07:06:53.597682] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=f2f78bbdf74da28e 00:06:20.141 passed 00:06:20.141 Test: set_md_interleave_iovs_split_test ...passed 00:06:20.141 Test: dif_generate_stream_pi_16_test ...passed 00:06:20.141 Test: dif_generate_stream_test ...passed 00:06:20.141 Test: set_md_interleave_iovs_alignment_test ...passed 00:06:20.141 Test: dif_generate_split_test ...[2024-02-13 07:06:53.605128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1799:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
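
The single expected error in this block, "Buffer overflow will occur" from spdk_dif_set_md_interleave_iovs, is the negative case of set_md_interleave_iovs_alignment_test: the function maps a caller's data onto an extended-block layout with room for interleaved metadata, and must refuse when the destination iovecs cannot hold the data plus metadata. A sketch of the kind of capacity check involved, under the simplifying assumption of whole blocks (not SPDK's actual logic):

    #include <stdint.h>
    #include <sys/uio.h>

    /* Would writing num_blocks extended blocks (data plus interleaved
     * metadata) overflow the space described by iovs? Returns 0 if it
     * fits, -1 if the overflow error above would fire. */
    static int
    check_interleave_capacity(const struct iovec *iovs, int iovcnt,
                              uint32_t extended_block_size,
                              uint32_t num_blocks)
    {
            uint64_t need = (uint64_t)extended_block_size * num_blocks;
            uint64_t have = 0;

            for (int i = 0; i < iovcnt; i++) {
                    have += iovs[i].iov_len;
            }
            return have >= need ? 0 : -1;
    }
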
00:06:20.141 passed 00:06:20.141 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:06:20.141 Test: dif_verify_split_test ...passed 00:06:20.141 Test: dif_verify_stream_multi_segments_test ...passed 00:06:20.141 Test: update_crc32c_pi_16_test ...passed 00:06:20.141 Test: update_crc32c_test ...passed 00:06:20.141 Test: dif_update_crc32c_split_test ...passed 00:06:20.141 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:06:20.141 Test: get_range_with_md_test ...passed 00:06:20.141 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:06:20.141 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:06:20.141 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:20.141 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:06:20.141 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:06:20.141 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:06:20.141 Test: dif_generate_and_verify_unmap_test ...passed 00:06:20.141 00:06:20.141 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.141 suites 1 1 n/a 0 0 00:06:20.141 tests 79 79 79 0 0 00:06:20.141 asserts 3584 3584 3584 0 n/a 00:06:20.141 00:06:20.141 Elapsed time = 0.346 seconds 00:06:20.141 07:06:53 -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:06:20.141 00:06:20.141 00:06:20.141 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.141 http://cunit.sourceforge.net/ 00:06:20.141 00:06:20.141 00:06:20.141 Suite: iov 00:06:20.141 Test: test_single_iov ...passed 00:06:20.141 Test: test_simple_iov ...passed 00:06:20.141 Test: test_complex_iov ...passed 00:06:20.141 Test: test_iovs_to_buf ...passed 00:06:20.141 Test: test_buf_to_iovs ...passed 00:06:20.141 Test: test_memset ...passed 00:06:20.141 Test: test_iov_one ...passed 00:06:20.141 Test: test_iov_xfer ...passed 00:06:20.141 00:06:20.141 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.141 suites 1 1 n/a 0 0 00:06:20.141 tests 8 8 8 0 0 00:06:20.141 asserts 156 156 156 0 n/a 00:06:20.141 00:06:20.141 Elapsed time = 0.000 seconds 00:06:20.141 07:06:53 -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:06:20.141 00:06:20.141 00:06:20.141 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.141 http://cunit.sourceforge.net/ 00:06:20.141 00:06:20.141 00:06:20.141 Suite: math 00:06:20.141 Test: test_serial_number_arithmetic ...passed 00:06:20.141 Suite: erase 00:06:20.141 Test: test_memset_s ...passed 00:06:20.141 00:06:20.141 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.141 suites 2 2 n/a 0 0 00:06:20.141 tests 2 2 2 0 0 00:06:20.141 asserts 18 18 18 0 n/a 00:06:20.141 00:06:20.141 Elapsed time = 0.000 seconds 00:06:20.141 07:06:53 -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:06:20.141 00:06:20.141 00:06:20.142 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.142 http://cunit.sourceforge.net/ 00:06:20.142 00:06:20.142 00:06:20.142 Suite: pipe 00:06:20.142 Test: test_create_destroy ...passed 00:06:20.142 Test: test_write_get_buffer ...passed 00:06:20.142 Test: test_write_advance ...passed 00:06:20.142 Test: test_read_get_buffer ...passed 00:06:20.142 Test: test_read_advance ...passed 00:06:20.142 Test: test_data ...passed 00:06:20.142 00:06:20.142 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.142 suites 1 1 n/a 0 
0 00:06:20.142 tests 6 6 6 0 0 00:06:20.142 asserts 251 251 251 0 n/a 00:06:20.142 00:06:20.142 Elapsed time = 0.000 seconds 00:06:20.142 07:06:53 -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:06:20.142 00:06:20.142 00:06:20.142 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.142 http://cunit.sourceforge.net/ 00:06:20.142 00:06:20.142 00:06:20.142 Suite: xor 00:06:20.142 Test: test_xor_gen ...passed 00:06:20.142 00:06:20.142 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.142 suites 1 1 n/a 0 0 00:06:20.142 tests 1 1 1 0 0 00:06:20.142 asserts 17 17 17 0 n/a 00:06:20.142 00:06:20.142 Elapsed time = 0.007 seconds 00:06:20.142 00:06:20.142 real 0m0.755s 00:06:20.142 user 0m0.589s 00:06:20.142 sys 0m0.171s 00:06:20.142 07:06:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.142 ************************************ 00:06:20.142 END TEST unittest_util 00:06:20.142 ************************************ 00:06:20.142 07:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:20.401 07:06:53 -- unit/unittest.sh@282 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:06:20.401 07:06:53 -- unit/unittest.sh@283 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:20.401 07:06:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:20.401 07:06:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:20.401 07:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:20.401 ************************************ 00:06:20.401 START TEST unittest_vhost 00:06:20.401 ************************************ 00:06:20.401 07:06:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut 00:06:20.401 00:06:20.401 00:06:20.401 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.401 http://cunit.sourceforge.net/ 00:06:20.401 00:06:20.401 00:06:20.401 Suite: vhost_suite 00:06:20.401 Test: desc_to_iov_test ...[2024-02-13 07:06:53.867077] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 647:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached 00:06:20.401 passed 00:06:20.401 Test: create_controller_test ...[2024-02-13 07:06:53.871544] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:20.401 [2024-02-13 07:06:53.871669] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf) 00:06:20.401 [2024-02-13 07:06:53.871787] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 80:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f) 00:06:20.401 [2024-02-13 07:06:53.871865] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 126:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf) 00:06:20.401 [2024-02-13 07:06:53.871910] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 121:vhost_dev_register: *ERROR*: Can't register controller with no name 00:06:20.401 [2024-02-13 07:06:53.872008] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1798:vhost_user_dev_init: *ERROR*: Resulting socket path for controller 
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx[2024-02-13 07:06:53.872984] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 133:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists. 00:06:20.401 passed 00:06:20.401 Test: session_find_by_vid_test ...passed 00:06:20.401 Test: remove_controller_test ...passed 00:06:20.401 Test: vq_avail_ring_get_test ...passed 00:06:20.401 Test: vq_packed_ring_test ...passed 00:06:20.401 Test: vhost_blk_construct_test ...[2024-02-13 07:06:53.875077] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1883:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection. 00:06:20.401 passed 00:06:20.401 00:06:20.401 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.401 suites 1 1 n/a 0 0 00:06:20.401 tests 7 7 7 0 0 00:06:20.401 asserts 145 145 145 0 n/a 00:06:20.401 00:06:20.401 Elapsed time = 0.012 seconds 00:06:20.401 00:06:20.401 real 0m0.045s 00:06:20.401 user 0m0.021s 00:06:20.401 sys 0m0.025s 00:06:20.401 ************************************ 00:06:20.401 END TEST unittest_vhost 00:06:20.401 ************************************ 00:06:20.401 07:06:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.401 07:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:20.401 07:06:53 -- unit/unittest.sh@285 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:20.401 07:06:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:20.401 07:06:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:20.401 07:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:20.401 ************************************ 00:06:20.401 START TEST unittest_dma 00:06:20.401 ************************************ 00:06:20.401 07:06:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:06:20.401 00:06:20.401 00:06:20.401 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.401 http://cunit.sourceforge.net/ 00:06:20.401 00:06:20.401 00:06:20.401 Suite: dma_suite 00:06:20.401 Test: test_dma ...passed 00:06:20.401 00:06:20.401 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.401 suites 1 1 n/a 0 0 00:06:20.401 tests 1 1 1 0 0 00:06:20.401 asserts 50 50 50 0 n/a 00:06:20.401 00:06:20.401 Elapsed time = 0.000 seconds 00:06:20.401 [2024-02-13 07:06:53.960594] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 37:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:06:20.401 00:06:20.401 real 0m0.033s 00:06:20.401 user 0m0.022s 00:06:20.401 sys 0m0.011s 00:06:20.401 07:06:53 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.401 ************************************ 00:06:20.401 END TEST unittest_dma 00:06:20.401 ************************************ 00:06:20.401 07:06:53 -- common/autotest_common.sh@10 -- # set +x 00:06:20.401 07:06:54 -- unit/unittest.sh@287 -- # run_test unittest_init unittest_init 00:06:20.402 07:06:54 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:06:20.402 07:06:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:06:20.402 07:06:54 -- common/autotest_common.sh@10 -- # set +x 00:06:20.402 ************************************ 00:06:20.402 START TEST unittest_init 00:06:20.402 ************************************ 00:06:20.402 07:06:54 -- common/autotest_common.sh@1102 -- # unittest_init 00:06:20.402 07:06:54 -- unit/unittest.sh@148 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:06:20.402 00:06:20.402 00:06:20.402 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.402 http://cunit.sourceforge.net/ 00:06:20.402 00:06:20.402 00:06:20.402 Suite: subsystem_suite 00:06:20.402 Test: subsystem_sort_test_depends_on_single ...passed 00:06:20.402 Test: subsystem_sort_test_depends_on_multiple ...passed 00:06:20.402 Test: subsystem_sort_test_missing_dependency ...[2024-02-13 07:06:54.049943] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 190:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:06:20.402 passed 00:06:20.402 00:06:20.402 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.402 suites 1 1 n/a 0 0 00:06:20.402 tests 3 3 3 0 0 00:06:20.402 asserts 20 20 20 0 n/a 00:06:20.402 00:06:20.402 Elapsed time = 0.001 seconds 00:06:20.402 [2024-02-13 07:06:54.050337] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 185:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:06:20.402 00:06:20.402 real 0m0.039s 00:06:20.402 user 0m0.018s 00:06:20.402 sys 0m0.021s 00:06:20.402 07:06:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.402 ************************************ 00:06:20.402 END TEST unittest_init 00:06:20.402 ************************************ 00:06:20.402 07:06:54 -- common/autotest_common.sh@10 -- # set +x 00:06:20.661 07:06:54 -- unit/unittest.sh@289 -- # '[' yes = yes ']' 00:06:20.661 07:06:54 -- unit/unittest.sh@289 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:06:20.661 07:06:54 -- unit/unittest.sh@290 -- # hostname 00:06:20.661 07:06:54 -- unit/unittest.sh@290 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -d . -c -t ubuntu2004-cloud-1678329680-1737 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:20.661 geninfo: WARNING: invalid characters removed from testname! 
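The subsystem_suite errors above ("subsystem A dependency B is missing", "subsystem C is missing") come from spdk_subsystem_init ordering subsystems by their declared dependencies before starting any of them. A minimal dependency-ordered startup loop in the same spirit; the types and messages are illustrative, not lib/init's:

#include <stdio.h>
#include <string.h>

/* Illustrative type: each subsystem names at most one dependency,
 * "" meaning none. */
struct subsystem {
    const char *name;
    const char *depends_on;
    int started;
};

/* Start subsystems in dependency order; fail, as spdk_subsystem_init
 * does, when a dependency names no registered subsystem. */
static int start_in_order(struct subsystem *subs, int n)
{
    int started = 0;

    while (started < n) {
        int progress = 0;

        for (int i = 0; i < n; i++) {
            if (subs[i].started)
                continue;
            const char *dep = subs[i].depends_on;
            int known = (dep[0] == '\0');
            int ready = known;

            for (int j = 0; j < n; j++) {
                if (strcmp(subs[j].name, dep) == 0) {
                    known = 1;
                    ready = subs[j].started;
                }
            }
            if (!known) {
                fprintf(stderr, "subsystem %s dependency %s is missing\n",
                        subs[i].name, dep);
                return -1;
            }
            if (ready) {
                printf("starting %s\n", subs[i].name);
                subs[i].started = 1;
                started++;
                progress = 1;
            }
        }
        if (!progress) {
            fprintf(stderr, "dependency cycle detected\n");
            return -1;
        }
    }
    return 0;
}

With subs = {{"A", "B", 0}} and n = 1 this prints the missing-dependency error, matching the first failing assert in the test above.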
00:06:27.226 07:07:00 -- unit/unittest.sh@291 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info 00:06:32.495 07:07:05 -- unit/unittest.sh@292 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:34.398 07:07:07 -- unit/unittest.sh@293 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:36.931 07:07:10 -- unit/unittest.sh@294 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:40.248 07:07:13 -- unit/unittest.sh@295 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:42.781 07:07:16 -- unit/unittest.sh@296 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:45.316 07:07:18 -- unit/unittest.sh@297 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:47.849 07:07:21 -- unit/unittest.sh@298 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info 00:06:47.849 07:07:21 -- unit/unittest.sh@299 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:06:48.415 Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info 00:06:48.415 Found 309 entries. 
00:06:48.415 Found common filename prefix "/home/vagrant/spdk_repo/spdk" 00:06:48.415 Writing .css and .png files. 00:06:48.415 Generating output. 00:06:48.676 Processing file include/linux/virtio_ring.h 00:06:48.935 Processing file include/spdk/mmio.h 00:06:48.935 Processing file include/spdk/nvmf_transport.h 00:06:48.935 Processing file include/spdk/histogram_data.h 00:06:48.935 Processing file include/spdk/endian.h 00:06:48.935 Processing file include/spdk/base64.h 00:06:48.936 Processing file include/spdk/trace.h 00:06:48.936 Processing file include/spdk/thread.h 00:06:48.936 Processing file include/spdk/bdev_module.h 00:06:48.936 Processing file include/spdk/nvme_spec.h 00:06:48.936 Processing file include/spdk/util.h 00:06:48.936 Processing file include/spdk/nvme.h 00:06:49.193 Processing file include/spdk_internal/nvme_tcp.h 00:06:49.193 Processing file include/spdk_internal/sock.h 00:06:49.193 Processing file include/spdk_internal/utf.h 00:06:49.193 Processing file include/spdk_internal/rdma.h 00:06:49.193 Processing file include/spdk_internal/sgl.h 00:06:49.193 Processing file include/spdk_internal/virtio.h 00:06:49.193 Processing file lib/accel/accel_rpc.c 00:06:49.193 Processing file lib/accel/accel_sw.c 00:06:49.193 Processing file lib/accel/accel.c 00:06:49.760 Processing file lib/bdev/bdev_zone.c 00:06:49.760 Processing file lib/bdev/scsi_nvme.c 00:06:49.760 Processing file lib/bdev/part.c 00:06:49.760 Processing file lib/bdev/bdev_rpc.c 00:06:49.760 Processing file lib/bdev/bdev.c 00:06:50.019 Processing file lib/blob/request.c 00:06:50.019 Processing file lib/blob/blobstore.h 00:06:50.019 Processing file lib/blob/blobstore.c 00:06:50.019 Processing file lib/blob/zeroes.c 00:06:50.019 Processing file lib/blob/blob_bs_dev.c 00:06:50.019 Processing file lib/blobfs/blobfs.c 00:06:50.019 Processing file lib/blobfs/tree.c 00:06:50.019 Processing file lib/conf/conf.c 00:06:50.276 Processing file lib/dma/dma.c 00:06:50.534 Processing file lib/env_dpdk/pci_virtio.c 00:06:50.534 Processing file lib/env_dpdk/pci_dpdk_2207.c 00:06:50.534 Processing file lib/env_dpdk/init.c 00:06:50.534 Processing file lib/env_dpdk/pci_idxd.c 00:06:50.534 Processing file lib/env_dpdk/pci_ioat.c 00:06:50.534 Processing file lib/env_dpdk/sigbus_handler.c 00:06:50.534 Processing file lib/env_dpdk/pci.c 00:06:50.534 Processing file lib/env_dpdk/memory.c 00:06:50.534 Processing file lib/env_dpdk/env.c 00:06:50.534 Processing file lib/env_dpdk/pci_dpdk_2211.c 00:06:50.534 Processing file lib/env_dpdk/pci_event.c 00:06:50.534 Processing file lib/env_dpdk/pci_vmd.c 00:06:50.534 Processing file lib/env_dpdk/threads.c 00:06:50.534 Processing file lib/env_dpdk/pci_dpdk.c 00:06:50.534 Processing file lib/event/scheduler_static.c 00:06:50.534 Processing file lib/event/log_rpc.c 00:06:50.534 Processing file lib/event/reactor.c 00:06:50.534 Processing file lib/event/app.c 00:06:50.534 Processing file lib/event/app_rpc.c 00:06:51.101 Processing file lib/ftl/ftl_debug.c 00:06:51.101 Processing file lib/ftl/ftl_reloc.c 00:06:51.101 Processing file lib/ftl/ftl_writer.h 00:06:51.101 Processing file lib/ftl/ftl_band_ops.c 00:06:51.101 Processing file lib/ftl/ftl_l2p_cache.c 00:06:51.101 Processing file lib/ftl/ftl_nv_cache.h 00:06:51.101 Processing file lib/ftl/ftl_p2l.c 00:06:51.101 Processing file lib/ftl/ftl_init.c 00:06:51.101 Processing file lib/ftl/ftl_io.c 00:06:51.101 Processing file lib/ftl/ftl_writer.c 00:06:51.101 Processing file lib/ftl/ftl_trace.c 00:06:51.101 Processing file lib/ftl/ftl_band.h 00:06:51.101 
Processing file lib/ftl/ftl_nv_cache.c 00:06:51.101 Processing file lib/ftl/ftl_l2p_flat.c 00:06:51.101 Processing file lib/ftl/ftl_layout.c 00:06:51.101 Processing file lib/ftl/ftl_band.c 00:06:51.101 Processing file lib/ftl/ftl_nv_cache_io.h 00:06:51.101 Processing file lib/ftl/ftl_io.h 00:06:51.101 Processing file lib/ftl/ftl_debug.h 00:06:51.101 Processing file lib/ftl/ftl_l2p.c 00:06:51.101 Processing file lib/ftl/ftl_rq.c 00:06:51.101 Processing file lib/ftl/ftl_core.h 00:06:51.101 Processing file lib/ftl/ftl_core.c 00:06:51.101 Processing file lib/ftl/ftl_sb.c 00:06:51.101 Processing file lib/ftl/base/ftl_base_bdev.c 00:06:51.101 Processing file lib/ftl/base/ftl_base_dev.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_p2l.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_band.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_startup.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_shutdown.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_upgrade.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_l2p.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_recovery.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_self_test.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_bdev.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_misc.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_ioch.c 00:06:51.363 Processing file lib/ftl/mngt/ftl_mngt_md.c 00:06:51.632 Processing file lib/ftl/nvc/ftl_nvc_dev.c 00:06:51.632 Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c 00:06:51.632 Processing file lib/ftl/upgrade/ftl_sb_v5.c 00:06:51.632 Processing file lib/ftl/upgrade/ftl_layout_upgrade.c 00:06:51.632 Processing file lib/ftl/upgrade/ftl_sb_v3.c 00:06:51.632 Processing file lib/ftl/upgrade/ftl_sb_upgrade.c 00:06:51.891 Processing file lib/ftl/utils/ftl_md.c 00:06:51.891 Processing file lib/ftl/utils/ftl_addr_utils.h 00:06:51.891 Processing file lib/ftl/utils/ftl_property.c 00:06:51.891 Processing file lib/ftl/utils/ftl_mempool.c 00:06:51.891 Processing file lib/ftl/utils/ftl_df.h 00:06:51.891 Processing file lib/ftl/utils/ftl_property.h 00:06:51.891 Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c 00:06:51.891 Processing file lib/ftl/utils/ftl_conf.c 00:06:51.891 Processing file lib/ftl/utils/ftl_bitmap.c 00:06:52.150 Processing file lib/idxd/idxd.c 00:06:52.150 Processing file lib/idxd/idxd_internal.h 00:06:52.150 Processing file lib/idxd/idxd_user.c 00:06:52.150 Processing file lib/init/subsystem_rpc.c 00:06:52.150 Processing file lib/init/json_config.c 00:06:52.150 Processing file lib/init/rpc.c 00:06:52.150 Processing file lib/init/subsystem.c 00:06:52.150 Processing file lib/ioat/ioat.c 00:06:52.150 Processing file lib/ioat/ioat_internal.h 00:06:52.717 Processing file lib/iscsi/portal_grp.c 00:06:52.717 Processing file lib/iscsi/iscsi_rpc.c 00:06:52.717 Processing file lib/iscsi/iscsi_subsystem.c 00:06:52.717 Processing file lib/iscsi/init_grp.c 00:06:52.717 Processing file lib/iscsi/task.c 00:06:52.717 Processing file lib/iscsi/iscsi.h 00:06:52.717 Processing file lib/iscsi/param.c 00:06:52.717 Processing file lib/iscsi/iscsi.c 00:06:52.717 Processing file lib/iscsi/tgt_node.c 00:06:52.717 Processing file lib/iscsi/task.h 00:06:52.717 Processing file lib/iscsi/conn.c 00:06:52.717 Processing file lib/iscsi/md5.c 00:06:52.717 Processing file lib/json/json_util.c 00:06:52.717 Processing file lib/json/json_write.c 00:06:52.717 Processing file lib/json/json_parse.c 00:06:52.975 Processing file 
lib/jsonrpc/jsonrpc_client_tcp.c 00:06:52.975 Processing file lib/jsonrpc/jsonrpc_client.c 00:06:52.975 Processing file lib/jsonrpc/jsonrpc_server_tcp.c 00:06:52.975 Processing file lib/jsonrpc/jsonrpc_server.c 00:06:52.975 Processing file lib/log/log_deprecated.c 00:06:52.975 Processing file lib/log/log_flags.c 00:06:52.975 Processing file lib/log/log.c 00:06:52.975 Processing file lib/lvol/lvol.c 00:06:53.234 Processing file lib/nbd/nbd_rpc.c 00:06:53.234 Processing file lib/nbd/nbd.c 00:06:53.234 Processing file lib/notify/notify_rpc.c 00:06:53.234 Processing file lib/notify/notify.c 00:06:54.170 Processing file lib/nvme/nvme_opal.c 00:06:54.170 Processing file lib/nvme/nvme_poll_group.c 00:06:54.170 Processing file lib/nvme/nvme_tcp.c 00:06:54.170 Processing file lib/nvme/nvme_discovery.c 00:06:54.170 Processing file lib/nvme/nvme_fabric.c 00:06:54.171 Processing file lib/nvme/nvme_io_msg.c 00:06:54.171 Processing file lib/nvme/nvme_transport.c 00:06:54.171 Processing file lib/nvme/nvme_ctrlr_cmd.c 00:06:54.171 Processing file lib/nvme/nvme.c 00:06:54.171 Processing file lib/nvme/nvme_ctrlr.c 00:06:54.171 Processing file lib/nvme/nvme_qpair.c 00:06:54.171 Processing file lib/nvme/nvme_ns_cmd.c 00:06:54.171 Processing file lib/nvme/nvme_pcie_common.c 00:06:54.171 Processing file lib/nvme/nvme_vfio_user.c 00:06:54.171 Processing file lib/nvme/nvme_rdma.c 00:06:54.171 Processing file lib/nvme/nvme_pcie.c 00:06:54.171 Processing file lib/nvme/nvme_ns_ocssd_cmd.c 00:06:54.171 Processing file lib/nvme/nvme_cuse.c 00:06:54.171 Processing file lib/nvme/nvme_ns.c 00:06:54.171 Processing file lib/nvme/nvme_pcie_internal.h 00:06:54.171 Processing file lib/nvme/nvme_quirks.c 00:06:54.171 Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c 00:06:54.171 Processing file lib/nvme/nvme_internal.h 00:06:54.171 Processing file lib/nvme/nvme_zns.c 00:06:54.738 Processing file lib/nvmf/nvmf_internal.h 00:06:54.738 Processing file lib/nvmf/rdma.c 00:06:54.738 Processing file lib/nvmf/transport.c 00:06:54.738 Processing file lib/nvmf/nvmf.c 00:06:54.738 Processing file lib/nvmf/ctrlr_discovery.c 00:06:54.738 Processing file lib/nvmf/tcp.c 00:06:54.738 Processing file lib/nvmf/subsystem.c 00:06:54.738 Processing file lib/nvmf/ctrlr.c 00:06:54.738 Processing file lib/nvmf/ctrlr_bdev.c 00:06:54.738 Processing file lib/nvmf/nvmf_rpc.c 00:06:54.738 Processing file lib/rdma/rdma_verbs.c 00:06:54.738 Processing file lib/rdma/common.c 00:06:54.997 Processing file lib/rpc/rpc.c 00:06:54.997 Processing file lib/scsi/lun.c 00:06:54.997 Processing file lib/scsi/scsi_rpc.c 00:06:54.997 Processing file lib/scsi/scsi_bdev.c 00:06:54.997 Processing file lib/scsi/dev.c 00:06:54.997 Processing file lib/scsi/task.c 00:06:54.997 Processing file lib/scsi/scsi_pr.c 00:06:54.997 Processing file lib/scsi/scsi.c 00:06:54.997 Processing file lib/scsi/port.c 00:06:55.256 Processing file lib/sock/sock.c 00:06:55.256 Processing file lib/sock/sock_rpc.c 00:06:55.256 Processing file lib/thread/iobuf.c 00:06:55.256 Processing file lib/thread/thread.c 00:06:55.515 Processing file lib/trace/trace_rpc.c 00:06:55.515 Processing file lib/trace/trace_flags.c 00:06:55.515 Processing file lib/trace/trace.c 00:06:55.515 Processing file lib/trace_parser/trace.cpp 00:06:55.515 Processing file lib/ut/ut.c 00:06:55.774 Processing file lib/ut_mock/mock.c 00:06:56.032 Processing file lib/util/base64.c 00:06:56.032 Processing file lib/util/file.c 00:06:56.032 Processing file lib/util/crc16.c 00:06:56.032 Processing file lib/util/bit_array.c 00:06:56.032 
Processing file lib/util/crc32.c 00:06:56.032 Processing file lib/util/math.c 00:06:56.032 Processing file lib/util/fd_group.c 00:06:56.032 Processing file lib/util/iov.c 00:06:56.032 Processing file lib/util/uuid.c 00:06:56.032 Processing file lib/util/strerror_tls.c 00:06:56.032 Processing file lib/util/fd.c 00:06:56.032 Processing file lib/util/crc64.c 00:06:56.032 Processing file lib/util/hexlify.c 00:06:56.032 Processing file lib/util/crc32_ieee.c 00:06:56.032 Processing file lib/util/crc32c.c 00:06:56.032 Processing file lib/util/cpuset.c 00:06:56.032 Processing file lib/util/zipf.c 00:06:56.032 Processing file lib/util/string.c 00:06:56.032 Processing file lib/util/xor.c 00:06:56.032 Processing file lib/util/pipe.c 00:06:56.032 Processing file lib/util/dif.c 00:06:56.032 Processing file lib/vfio_user/host/vfio_user.c 00:06:56.032 Processing file lib/vfio_user/host/vfio_user_pci.c 00:06:56.291 Processing file lib/vhost/vhost_rpc.c 00:06:56.291 Processing file lib/vhost/rte_vhost_user.c 00:06:56.291 Processing file lib/vhost/vhost_scsi.c 00:06:56.291 Processing file lib/vhost/vhost_internal.h 00:06:56.291 Processing file lib/vhost/vhost.c 00:06:56.291 Processing file lib/vhost/vhost_blk.c 00:06:56.551 Processing file lib/virtio/virtio_vhost_user.c 00:06:56.551 Processing file lib/virtio/virtio_pci.c 00:06:56.551 Processing file lib/virtio/virtio.c 00:06:56.551 Processing file lib/virtio/virtio_vfio_user.c 00:06:56.551 Processing file lib/vmd/vmd.c 00:06:56.551 Processing file lib/vmd/led.c 00:06:56.551 Processing file module/accel/dsa/accel_dsa.c 00:06:56.551 Processing file module/accel/dsa/accel_dsa_rpc.c 00:06:56.810 Processing file module/accel/error/accel_error.c 00:06:56.810 Processing file module/accel/error/accel_error_rpc.c 00:06:56.810 Processing file module/accel/iaa/accel_iaa.c 00:06:56.810 Processing file module/accel/iaa/accel_iaa_rpc.c 00:06:56.810 Processing file module/accel/ioat/accel_ioat_rpc.c 00:06:56.810 Processing file module/accel/ioat/accel_ioat.c 00:06:57.068 Processing file module/bdev/aio/bdev_aio.c 00:06:57.068 Processing file module/bdev/aio/bdev_aio_rpc.c 00:06:57.068 Processing file module/bdev/delay/vbdev_delay.c 00:06:57.068 Processing file module/bdev/delay/vbdev_delay_rpc.c 00:06:57.068 Processing file module/bdev/error/vbdev_error_rpc.c 00:06:57.068 Processing file module/bdev/error/vbdev_error.c 00:06:57.327 Processing file module/bdev/ftl/bdev_ftl.c 00:06:57.327 Processing file module/bdev/ftl/bdev_ftl_rpc.c 00:06:57.327 Processing file module/bdev/gpt/gpt.c 00:06:57.327 Processing file module/bdev/gpt/vbdev_gpt.c 00:06:57.327 Processing file module/bdev/gpt/gpt.h 00:06:57.327 Processing file module/bdev/iscsi/bdev_iscsi_rpc.c 00:06:57.327 Processing file module/bdev/iscsi/bdev_iscsi.c 00:06:57.586 Processing file module/bdev/lvol/vbdev_lvol_rpc.c 00:06:57.586 Processing file module/bdev/lvol/vbdev_lvol.c 00:06:57.586 Processing file module/bdev/malloc/bdev_malloc_rpc.c 00:06:57.586 Processing file module/bdev/malloc/bdev_malloc.c 00:06:57.586 Processing file module/bdev/null/bdev_null.c 00:06:57.586 Processing file module/bdev/null/bdev_null_rpc.c 00:06:57.856 Processing file module/bdev/nvme/bdev_nvme.c 00:06:57.856 Processing file module/bdev/nvme/vbdev_opal_rpc.c 00:06:57.856 Processing file module/bdev/nvme/bdev_nvme_rpc.c 00:06:57.856 Processing file module/bdev/nvme/nvme_rpc.c 00:06:57.856 Processing file module/bdev/nvme/vbdev_opal.c 00:06:57.856 Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c 00:06:57.856 Processing file 
module/bdev/nvme/bdev_mdns_client.c 00:06:58.128 Processing file module/bdev/passthru/vbdev_passthru_rpc.c 00:06:58.128 Processing file module/bdev/passthru/vbdev_passthru.c 00:06:58.387 Processing file module/bdev/raid/raid1.c 00:06:58.388 Processing file module/bdev/raid/concat.c 00:06:58.388 Processing file module/bdev/raid/bdev_raid_rpc.c 00:06:58.388 Processing file module/bdev/raid/raid5f.c 00:06:58.388 Processing file module/bdev/raid/bdev_raid.c 00:06:58.388 Processing file module/bdev/raid/bdev_raid_sb.c 00:06:58.388 Processing file module/bdev/raid/raid0.c 00:06:58.388 Processing file module/bdev/raid/bdev_raid.h 00:06:58.388 Processing file module/bdev/split/vbdev_split.c 00:06:58.388 Processing file module/bdev/split/vbdev_split_rpc.c 00:06:58.647 Processing file module/bdev/virtio/bdev_virtio_blk.c 00:06:58.647 Processing file module/bdev/virtio/bdev_virtio_scsi.c 00:06:58.647 Processing file module/bdev/virtio/bdev_virtio_rpc.c 00:06:58.647 Processing file module/bdev/zone_block/vbdev_zone_block.c 00:06:58.647 Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c 00:06:58.647 Processing file module/blob/bdev/blob_bdev.c 00:06:58.905 Processing file module/blobfs/bdev/blobfs_bdev.c 00:06:58.905 Processing file module/blobfs/bdev/blobfs_bdev_rpc.c 00:06:58.905 Processing file module/env_dpdk/env_dpdk_rpc.c 00:06:58.905 Processing file module/event/subsystems/accel/accel.c 00:06:58.905 Processing file module/event/subsystems/bdev/bdev.c 00:06:59.164 Processing file module/event/subsystems/iobuf/iobuf.c 00:06:59.164 Processing file module/event/subsystems/iobuf/iobuf_rpc.c 00:06:59.164 Processing file module/event/subsystems/iscsi/iscsi.c 00:06:59.164 Processing file module/event/subsystems/nbd/nbd.c 00:06:59.424 Processing file module/event/subsystems/nvmf/nvmf_tgt.c 00:06:59.424 Processing file module/event/subsystems/nvmf/nvmf_rpc.c 00:06:59.424 Processing file module/event/subsystems/scheduler/scheduler.c 00:06:59.424 Processing file module/event/subsystems/scsi/scsi.c 00:06:59.424 Processing file module/event/subsystems/sock/sock.c 00:06:59.684 Processing file module/event/subsystems/vhost_blk/vhost_blk.c 00:06:59.684 Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c 00:06:59.684 Processing file module/event/subsystems/vmd/vmd_rpc.c 00:06:59.684 Processing file module/event/subsystems/vmd/vmd.c 00:06:59.684 Processing file module/scheduler/dpdk_governor/dpdk_governor.c 00:06:59.942 Processing file module/scheduler/dynamic/scheduler_dynamic.c 00:06:59.942 Processing file module/scheduler/gscheduler/gscheduler.c 00:06:59.942 Processing file module/sock/sock_kernel.h 00:07:00.201 Processing file module/sock/posix/posix.c 00:07:00.201 Writing directory view page. 
00:07:00.201 Overall coverage rate: 00:07:00.201 lines......: 39.0% (39259 of 100583 lines) 00:07:00.202 functions..: 42.7% (3589 of 8404 functions) 00:07:00.202 00:07:00.202 00:07:00.202 ===================== 00:07:00.202 All unit tests passed 00:07:00.202 ===================== 00:07:00.202 Note: coverage report is here: /home/vagrant/spdk_repo/spdk//home/vagrant/spdk_repo/spdk/../output/ut_coverage 00:07:00.202 07:07:33 -- unit/unittest.sh@302 -- # set +x 00:07:00.202 00:07:00.202 00:07:00.202 00:07:00.202 real 2m11.095s 00:07:00.202 user 1m45.242s 00:07:00.202 sys 0m14.497s 00:07:00.202 07:07:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.202 07:07:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.202 ************************************ 00:07:00.202 END TEST unittest 00:07:00.202 ************************************ 00:07:00.202 07:07:33 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:07:00.202 07:07:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:00.202 07:07:33 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:07:00.202 07:07:33 -- spdk/autotest.sh@173 -- # timing_enter lib 00:07:00.202 07:07:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:00.202 07:07:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.202 07:07:33 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:00.202 07:07:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:00.202 07:07:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:00.202 07:07:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.202 ************************************ 00:07:00.202 START TEST env 00:07:00.202 ************************************ 00:07:00.202 07:07:33 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:00.202 * Looking for test storage... 
00:07:00.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:00.202 07:07:33 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:00.202 07:07:33 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:00.202 07:07:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:00.202 07:07:33 -- common/autotest_common.sh@10 -- # set +x 00:07:00.461 ************************************ 00:07:00.461 START TEST env_memory 00:07:00.461 ************************************ 00:07:00.461 07:07:33 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:00.461 00:07:00.461 00:07:00.461 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.461 http://cunit.sourceforge.net/ 00:07:00.461 00:07:00.461 00:07:00.461 Suite: memory 00:07:00.461 Test: alloc and free memory map ...[2024-02-13 07:07:33.951594] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:00.461 passed 00:07:00.461 Test: mem map translation ...[2024-02-13 07:07:34.006057] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:00.461 [2024-02-13 07:07:34.006212] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:00.461 [2024-02-13 07:07:34.006345] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:00.461 [2024-02-13 07:07:34.006426] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:00.461 passed 00:07:00.461 Test: mem map registration ...[2024-02-13 07:07:34.090617] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:07:00.461 [2024-02-13 07:07:34.090750] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:07:00.461 passed 00:07:00.720 Test: mem map adjacent registrations ...passed 00:07:00.720 00:07:00.720 Run Summary: Type Total Ran Passed Failed Inactive 00:07:00.720 suites 1 1 n/a 0 0 00:07:00.720 tests 4 4 4 0 0 00:07:00.720 asserts 152 152 152 0 n/a 00:07:00.720 00:07:00.720 Elapsed time = 0.297 seconds 00:07:00.720 00:07:00.720 real 0m0.333s 00:07:00.720 user 0m0.292s 00:07:00.720 sys 0m0.041s 00:07:00.720 07:07:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.720 ************************************ 00:07:00.720 END TEST env_memory 00:07:00.720 ************************************ 00:07:00.720 07:07:34 -- common/autotest_common.sh@10 -- # set +x 00:07:00.720 07:07:34 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:00.720 07:07:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:00.720 07:07:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:00.720 07:07:34 -- common/autotest_common.sh@10 -- # set +x 00:07:00.720 ************************************ 00:07:00.720 START TEST env_vtophys 00:07:00.720 ************************************ 00:07:00.720 07:07:34 -- common/autotest_common.sh@1102 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:00.720 EAL: lib.eal log level changed from notice to debug 00:07:00.720 EAL: Detected lcore 0 as core 0 on socket 0 00:07:00.720 EAL: Detected lcore 1 as core 0 on socket 0 00:07:00.720 EAL: Detected lcore 2 as core 0 on socket 0 00:07:00.720 EAL: Detected lcore 3 as core 0 on socket 0 00:07:00.720 EAL: Detected lcore 4 as core 0 on socket 0 00:07:00.720 EAL: Detected lcore 5 as core 0 on socket 0 00:07:00.720 EAL: Detected lcore 6 as core 0 on socket 0 00:07:00.720 EAL: Detected lcore 7 as core 0 on socket 0 00:07:00.720 EAL: Detected lcore 8 as core 0 on socket 0 00:07:00.720 EAL: Detected lcore 9 as core 0 on socket 0 00:07:00.720 EAL: Maximum logical cores by configuration: 128 00:07:00.720 EAL: Detected CPU lcores: 10 00:07:00.720 EAL: Detected NUMA nodes: 1 00:07:00.720 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:07:00.720 EAL: Checking presence of .so 'librte_eal.so.24' 00:07:00.720 EAL: Checking presence of .so 'librte_eal.so' 00:07:00.720 EAL: Detected static linkage of DPDK 00:07:00.720 EAL: No shared files mode enabled, IPC will be disabled 00:07:00.720 EAL: Selected IOVA mode 'PA' 00:07:00.720 EAL: Probing VFIO support... 00:07:00.720 EAL: IOMMU type 1 (Type 1) is supported 00:07:00.720 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:00.720 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:00.720 EAL: VFIO support initialized 00:07:00.720 EAL: Ask a virtual area of 0x2e000 bytes 00:07:00.720 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:00.720 EAL: Setting up physically contiguous memory... 00:07:00.720 EAL: Setting maximum number of open files to 1048576 00:07:00.720 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:00.720 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:00.720 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.720 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:00.720 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.720 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.720 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:00.720 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:00.720 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.720 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:00.720 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.720 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.720 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:00.720 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:00.720 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.720 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:00.720 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.720 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.720 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:00.720 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:00.720 EAL: Ask a virtual area of 0x61000 bytes 00:07:00.720 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:00.720 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:00.720 EAL: Ask a virtual area of 0x400000000 bytes 00:07:00.720 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:00.721 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:00.721 EAL: Hugepages will be freed exactly as allocated. 
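vtophys stresses EAL's virtual-to-physical translation across the memseg lists set up above. Outside DPDK, the same lookup can be sketched on Linux through /proc/self/pagemap; vtophys_pagemap below is an illustrative stand-in, not SPDK's spdk_vtophys():

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Resolve a virtual address to a physical address via
 * /proc/self/pagemap. Needs root (the kernel zeroes PFNs otherwise);
 * returns 0 on failure. */
static uint64_t vtophys_pagemap(const void *vaddr)
{
    long pgsz = sysconf(_SC_PAGESIZE);
    uint64_t entry = 0;
    off_t off = ((uintptr_t)vaddr / (uintptr_t)pgsz) * sizeof(entry);
    int fd = open("/proc/self/pagemap", O_RDONLY);

    if (fd < 0)
        return 0;
    if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) {
        close(fd);
        return 0;
    }
    close(fd);
    if (!(entry & (1ULL << 63)))                 /* bit 63: page present */
        return 0;
    return (entry & ((1ULL << 55) - 1)) * (uint64_t)pgsz  /* bits 0-54: PFN */
           + ((uintptr_t)vaddr % (uintptr_t)pgsz);
}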
00:07:00.721 EAL: No shared files mode enabled, IPC is disabled 00:07:00.721 EAL: No shared files mode enabled, IPC is disabled 00:07:00.980 EAL: TSC frequency is ~2200000 KHz 00:07:00.980 EAL: Main lcore 0 is ready (tid=7fe9694eea40;cpuset=[0]) 00:07:00.980 EAL: Trying to obtain current memory policy. 00:07:00.980 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:00.980 EAL: Restoring previous memory policy: 0 00:07:00.980 EAL: request: mp_malloc_sync 00:07:00.980 EAL: No shared files mode enabled, IPC is disabled 00:07:00.980 EAL: Heap on socket 0 was expanded by 2MB 00:07:00.980 EAL: No shared files mode enabled, IPC is disabled 00:07:00.980 EAL: Mem event callback 'spdk:(nil)' registered 00:07:00.980 00:07:00.980 00:07:00.980 CUnit - A unit testing framework for C - Version 2.1-3 00:07:00.980 http://cunit.sourceforge.net/ 00:07:00.980 00:07:00.980 00:07:00.980 Suite: components_suite 00:07:01.549 Test: vtophys_malloc_test ...passed 00:07:01.549 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:01.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:01.549 EAL: Restoring previous memory policy: 0 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was expanded by 4MB 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was shrunk by 4MB 00:07:01.549 EAL: Trying to obtain current memory policy. 00:07:01.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:01.549 EAL: Restoring previous memory policy: 0 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was expanded by 6MB 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was shrunk by 6MB 00:07:01.549 EAL: Trying to obtain current memory policy. 00:07:01.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:01.549 EAL: Restoring previous memory policy: 0 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was expanded by 10MB 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was shrunk by 10MB 00:07:01.549 EAL: Trying to obtain current memory policy. 00:07:01.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:01.549 EAL: Restoring previous memory policy: 0 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was expanded by 18MB 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was shrunk by 18MB 00:07:01.549 EAL: Trying to obtain current memory policy. 
00:07:01.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:01.549 EAL: Restoring previous memory policy: 0 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was expanded by 34MB 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was shrunk by 34MB 00:07:01.549 EAL: Trying to obtain current memory policy. 00:07:01.549 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:01.549 EAL: Restoring previous memory policy: 0 00:07:01.549 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.549 EAL: request: mp_malloc_sync 00:07:01.549 EAL: No shared files mode enabled, IPC is disabled 00:07:01.549 EAL: Heap on socket 0 was expanded by 66MB 00:07:01.808 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.808 EAL: request: mp_malloc_sync 00:07:01.808 EAL: No shared files mode enabled, IPC is disabled 00:07:01.808 EAL: Heap on socket 0 was shrunk by 66MB 00:07:01.808 EAL: Trying to obtain current memory policy. 00:07:01.808 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:01.808 EAL: Restoring previous memory policy: 0 00:07:01.808 EAL: Calling mem event callback 'spdk:(nil)' 00:07:01.808 EAL: request: mp_malloc_sync 00:07:01.808 EAL: No shared files mode enabled, IPC is disabled 00:07:01.808 EAL: Heap on socket 0 was expanded by 130MB 00:07:02.068 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.068 EAL: request: mp_malloc_sync 00:07:02.068 EAL: No shared files mode enabled, IPC is disabled 00:07:02.068 EAL: Heap on socket 0 was shrunk by 130MB 00:07:02.326 EAL: Trying to obtain current memory policy. 00:07:02.326 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:02.326 EAL: Restoring previous memory policy: 0 00:07:02.326 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.326 EAL: request: mp_malloc_sync 00:07:02.326 EAL: No shared files mode enabled, IPC is disabled 00:07:02.326 EAL: Heap on socket 0 was expanded by 258MB 00:07:02.894 EAL: Calling mem event callback 'spdk:(nil)' 00:07:02.894 EAL: request: mp_malloc_sync 00:07:02.894 EAL: No shared files mode enabled, IPC is disabled 00:07:02.894 EAL: Heap on socket 0 was shrunk by 258MB 00:07:03.153 EAL: Trying to obtain current memory policy. 00:07:03.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:03.153 EAL: Restoring previous memory policy: 0 00:07:03.153 EAL: Calling mem event callback 'spdk:(nil)' 00:07:03.153 EAL: request: mp_malloc_sync 00:07:03.153 EAL: No shared files mode enabled, IPC is disabled 00:07:03.153 EAL: Heap on socket 0 was expanded by 514MB 00:07:04.090 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.090 EAL: request: mp_malloc_sync 00:07:04.090 EAL: No shared files mode enabled, IPC is disabled 00:07:04.090 EAL: Heap on socket 0 was shrunk by 514MB 00:07:04.724 EAL: Trying to obtain current memory policy. 
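Each expanded-by/shrunk-by pair in the malloc test above is EAL growing or trimming its heap for one allocation and firing the "spdk:(nil)" mem event callback logged at startup; that hook is how SPDK keeps its address maps in sync with the heap. A sketch of registering such a hook, assuming DPDK's mem-event API from rte_memory.h (the callback body is illustrative):

#include <stdio.h>
#include <rte_memory.h>

/* Fires on each dynamic heap grow/shrink, i.e. at every
 * "Heap on socket 0 was expanded/shrunk by N MB" line above. */
static void mem_event_cb(enum rte_mem_event event_type, const void *addr,
                         size_t len, void *arg)
{
    (void)arg;
    printf("%s: addr=%p len=%zu\n",
           event_type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

/* Call after rte_eal_init(); the name is echoed back in EAL's
 * "Mem event callback '<name>' registered" line. */
static int register_mem_event_cb(void)
{
    return rte_mem_event_callback_register("sketch", mem_event_cb, NULL);
}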
00:07:04.724 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:04.983 EAL: Restoring previous memory policy: 0 00:07:04.983 EAL: Calling mem event callback 'spdk:(nil)' 00:07:04.983 EAL: request: mp_malloc_sync 00:07:04.983 EAL: No shared files mode enabled, IPC is disabled 00:07:04.983 EAL: Heap on socket 0 was expanded by 1026MB 00:07:06.889 EAL: Calling mem event callback 'spdk:(nil)' 00:07:06.889 EAL: request: mp_malloc_sync 00:07:06.889 EAL: No shared files mode enabled, IPC is disabled 00:07:06.889 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:08.268 passed 00:07:08.268 00:07:08.268 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.268 suites 1 1 n/a 0 0 00:07:08.268 tests 2 2 2 0 0 00:07:08.268 asserts 6496 6496 6496 0 n/a 00:07:08.268 00:07:08.268 Elapsed time = 7.192 seconds 00:07:08.268 EAL: Calling mem event callback 'spdk:(nil)' 00:07:08.268 EAL: request: mp_malloc_sync 00:07:08.268 EAL: No shared files mode enabled, IPC is disabled 00:07:08.268 EAL: Heap on socket 0 was shrunk by 2MB 00:07:08.268 EAL: No shared files mode enabled, IPC is disabled 00:07:08.268 EAL: No shared files mode enabled, IPC is disabled 00:07:08.268 EAL: No shared files mode enabled, IPC is disabled 00:07:08.268 00:07:08.268 real 0m7.499s 00:07:08.268 user 0m6.294s 00:07:08.268 sys 0m1.065s 00:07:08.268 07:07:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.268 ************************************ 00:07:08.268 END TEST env_vtophys 00:07:08.268 ************************************ 00:07:08.268 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:08.268 07:07:41 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:08.268 07:07:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:08.268 07:07:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:08.268 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:08.268 ************************************ 00:07:08.268 START TEST env_pci 00:07:08.268 ************************************ 00:07:08.268 07:07:41 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:08.268 00:07:08.268 00:07:08.268 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.268 http://cunit.sourceforge.net/ 00:07:08.268 00:07:08.268 00:07:08.268 Suite: pci 00:07:08.268 Test: pci_hook ...[2024-02-13 07:07:41.873194] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 106164 has claimed it 00:07:08.268 passed 00:07:08.268 00:07:08.268 EAL: Cannot find device (10000:00:01.0) 00:07:08.268 EAL: Failed to attach device on primary process 00:07:08.268 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.268 suites 1 1 n/a 0 0 00:07:08.268 tests 1 1 1 0 0 00:07:08.268 asserts 25 25 25 0 n/a 00:07:08.268 00:07:08.268 Elapsed time = 0.008 seconds 00:07:08.268 00:07:08.268 real 0m0.097s 00:07:08.268 user 0m0.051s 00:07:08.268 sys 0m0.047s 00:07:08.268 07:07:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.268 ************************************ 00:07:08.268 END TEST env_pci 00:07:08.268 ************************************ 00:07:08.268 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:08.527 07:07:41 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:08.527 07:07:41 -- env/env.sh@15 -- # uname 00:07:08.527 07:07:41 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:08.527 07:07:41 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:07:08.527 07:07:41 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:08.527 07:07:41 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:07:08.527 07:07:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:08.527 07:07:41 -- common/autotest_common.sh@10 -- # set +x 00:07:08.527 ************************************ 00:07:08.527 START TEST env_dpdk_post_init 00:07:08.527 ************************************ 00:07:08.527 07:07:41 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:08.527 EAL: Detected CPU lcores: 10 00:07:08.527 EAL: Detected NUMA nodes: 1 00:07:08.527 EAL: Detected static linkage of DPDK 00:07:08.527 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:08.527 EAL: Selected IOVA mode 'PA' 00:07:08.527 EAL: VFIO support initialized 00:07:08.785 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:08.785 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket 0) 00:07:08.785 Starting DPDK initialization... 00:07:08.785 Starting SPDK post initialization... 00:07:08.785 SPDK NVMe probe 00:07:08.785 Attaching to 0000:00:06.0 00:07:08.785 Attached to 0000:00:06.0 00:07:08.785 Cleaning up... 00:07:08.785 00:07:08.785 real 0m0.292s 00:07:08.785 user 0m0.067s 00:07:08.785 sys 0m0.125s 00:07:08.785 07:07:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:08.785 07:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:08.785 ************************************ 00:07:08.785 END TEST env_dpdk_post_init 00:07:08.785 ************************************ 00:07:08.785 07:07:42 -- env/env.sh@26 -- # uname 00:07:08.785 07:07:42 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:08.785 07:07:42 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:08.785 07:07:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:08.785 07:07:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:08.785 07:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:08.785 ************************************ 00:07:08.785 START TEST env_mem_callbacks 00:07:08.785 ************************************ 00:07:08.785 07:07:42 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:08.785 EAL: Detected CPU lcores: 10 00:07:08.785 EAL: Detected NUMA nodes: 1 00:07:08.785 EAL: Detected static linkage of DPDK 00:07:08.785 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:08.785 EAL: Selected IOVA mode 'PA' 00:07:08.785 EAL: VFIO support initialized 00:07:09.044 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:09.044 00:07:09.044 00:07:09.044 CUnit - A unit testing framework for C - Version 2.1-3 00:07:09.044 http://cunit.sourceforge.net/ 00:07:09.044 00:07:09.044 00:07:09.044 Suite: memory 00:07:09.044 Test: test ... 
00:07:09.044 register 0x200000200000 2097152 00:07:09.044 malloc 3145728 00:07:09.044 register 0x200000400000 4194304 00:07:09.044 buf 0x2000004fffc0 len 3145728 PASSED 00:07:09.044 malloc 64 00:07:09.044 buf 0x2000004ffec0 len 64 PASSED 00:07:09.044 malloc 4194304 00:07:09.044 register 0x200000800000 6291456 00:07:09.044 buf 0x2000009fffc0 len 4194304 PASSED 00:07:09.044 free 0x2000004fffc0 3145728 00:07:09.044 free 0x2000004ffec0 64 00:07:09.044 unregister 0x200000400000 4194304 PASSED 00:07:09.044 free 0x2000009fffc0 4194304 00:07:09.044 unregister 0x200000800000 6291456 PASSED 00:07:09.044 malloc 8388608 00:07:09.044 register 0x200000400000 10485760 00:07:09.044 buf 0x2000005fffc0 len 8388608 PASSED 00:07:09.044 free 0x2000005fffc0 8388608 00:07:09.044 unregister 0x200000400000 10485760 PASSED 00:07:09.044 passed 00:07:09.044 00:07:09.044 Run Summary: Type Total Ran Passed Failed Inactive 00:07:09.044 suites 1 1 n/a 0 0 00:07:09.044 tests 1 1 1 0 0 00:07:09.045 asserts 15 15 15 0 n/a 00:07:09.045 00:07:09.045 Elapsed time = 0.053 seconds 00:07:09.045 00:07:09.045 real 0m0.285s 00:07:09.045 user 0m0.129s 00:07:09.045 sys 0m0.056s 00:07:09.045 ************************************ 00:07:09.045 END TEST env_mem_callbacks 00:07:09.045 ************************************ 00:07:09.045 07:07:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.045 07:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.045 00:07:09.045 real 0m8.875s 00:07:09.045 user 0m7.028s 00:07:09.045 sys 0m1.485s 00:07:09.045 07:07:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.045 ************************************ 00:07:09.045 END TEST env 00:07:09.045 ************************************ 00:07:09.045 07:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.045 07:07:42 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:09.045 07:07:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:09.045 07:07:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:09.045 07:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.045 ************************************ 00:07:09.045 START TEST rpc 00:07:09.045 ************************************ 00:07:09.045 07:07:42 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:09.304 * Looking for test storage... 00:07:09.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:09.304 07:07:42 -- rpc/rpc.sh@65 -- # spdk_pid=106307 00:07:09.304 07:07:42 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.304 07:07:42 -- rpc/rpc.sh@67 -- # waitforlisten 106307 00:07:09.304 07:07:42 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:09.304 07:07:42 -- common/autotest_common.sh@817 -- # '[' -z 106307 ']' 00:07:09.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.304 07:07:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.304 07:07:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:09.304 07:07:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
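waitforlisten above blocks until the freshly launched spdk_tgt accepts connections on /var/tmp/spdk.sock. A minimal readiness probe in the same spirit; wait_for_unix_socket is an illustrative helper, not the script's implementation:

#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Retry connect() on the target's RPC socket until it accepts
 * or the retry budget runs out. Returns 0 once it is listening. */
static int wait_for_unix_socket(const char *path, int max_retries)
{
    struct sockaddr_un sa;

    memset(&sa, 0, sizeof(sa));
    sa.sun_family = AF_UNIX;
    strncpy(sa.sun_path, path, sizeof(sa.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);
            return 0;                 /* target is listening */
        }
        close(fd);
        usleep(100 * 1000);           /* 100 ms between attempts */
    }
    return -1;
}

wait_for_unix_socket("/var/tmp/spdk.sock", 100) polls for up to ten seconds at that interval.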
00:07:09.304 07:07:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:09.304 07:07:42 -- common/autotest_common.sh@10 -- # set +x 00:07:09.304 [2024-02-13 07:07:42.926872] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:07:09.304 [2024-02-13 07:07:42.927174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106307 ] 00:07:09.563 [2024-02-13 07:07:43.105194] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.821 [2024-02-13 07:07:43.377811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.821 [2024-02-13 07:07:43.378045] app.c: 486:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:09.821 [2024-02-13 07:07:43.378089] app.c: 487:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 106307' to capture a snapshot of events at runtime. 00:07:09.821 [2024-02-13 07:07:43.378109] app.c: 492:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid106307 for offline analysis/debug. 00:07:09.821 [2024-02-13 07:07:43.378204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.258 07:07:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.258 07:07:44 -- common/autotest_common.sh@850 -- # return 0 00:07:11.258 07:07:44 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:11.258 07:07:44 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:11.258 07:07:44 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:11.258 07:07:44 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:11.258 07:07:44 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:11.258 07:07:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:11.258 07:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.258 ************************************ 00:07:11.258 START TEST rpc_integrity 00:07:11.258 ************************************ 00:07:11.258 07:07:44 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:07:11.258 07:07:44 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:11.258 07:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.258 07:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.258 07:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.258 07:07:44 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:11.258 07:07:44 -- rpc/rpc.sh@13 -- # jq length 00:07:11.258 07:07:44 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:11.258 07:07:44 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:11.258 07:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.258 07:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.258 07:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.258 07:07:44 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:11.258 07:07:44 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:11.258 07:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.258 07:07:44 -- 
common/autotest_common.sh@10 -- # set +x 00:07:11.258 07:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.258 07:07:44 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:11.258 { 00:07:11.258 "name": "Malloc0", 00:07:11.258 "aliases": [ 00:07:11.258 "b1ae62d4-c08d-4a22-8701-0f000ed7d2f3" 00:07:11.258 ], 00:07:11.258 "product_name": "Malloc disk", 00:07:11.258 "block_size": 512, 00:07:11.258 "num_blocks": 16384, 00:07:11.258 "uuid": "b1ae62d4-c08d-4a22-8701-0f000ed7d2f3", 00:07:11.258 "assigned_rate_limits": { 00:07:11.258 "rw_ios_per_sec": 0, 00:07:11.258 "rw_mbytes_per_sec": 0, 00:07:11.258 "r_mbytes_per_sec": 0, 00:07:11.258 "w_mbytes_per_sec": 0 00:07:11.258 }, 00:07:11.258 "claimed": false, 00:07:11.258 "zoned": false, 00:07:11.258 "supported_io_types": { 00:07:11.258 "read": true, 00:07:11.258 "write": true, 00:07:11.258 "unmap": true, 00:07:11.258 "write_zeroes": true, 00:07:11.258 "flush": true, 00:07:11.258 "reset": true, 00:07:11.258 "compare": false, 00:07:11.258 "compare_and_write": false, 00:07:11.258 "abort": true, 00:07:11.258 "nvme_admin": false, 00:07:11.258 "nvme_io": false 00:07:11.258 }, 00:07:11.258 "memory_domains": [ 00:07:11.258 { 00:07:11.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.258 "dma_device_type": 2 00:07:11.258 } 00:07:11.258 ], 00:07:11.258 "driver_specific": {} 00:07:11.258 } 00:07:11.258 ]' 00:07:11.258 07:07:44 -- rpc/rpc.sh@17 -- # jq length 00:07:11.258 07:07:44 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:11.258 07:07:44 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:11.258 07:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.258 07:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.258 [2024-02-13 07:07:44.786470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:11.258 [2024-02-13 07:07:44.786616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:11.258 [2024-02-13 07:07:44.786663] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:11.258 [2024-02-13 07:07:44.786688] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:11.258 [2024-02-13 07:07:44.789051] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:11.258 [2024-02-13 07:07:44.789196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:11.258 Passthru0 00:07:11.258 07:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.258 07:07:44 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:11.258 07:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.258 07:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.258 07:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.258 07:07:44 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:11.258 { 00:07:11.258 "name": "Malloc0", 00:07:11.258 "aliases": [ 00:07:11.258 "b1ae62d4-c08d-4a22-8701-0f000ed7d2f3" 00:07:11.258 ], 00:07:11.258 "product_name": "Malloc disk", 00:07:11.258 "block_size": 512, 00:07:11.258 "num_blocks": 16384, 00:07:11.258 "uuid": "b1ae62d4-c08d-4a22-8701-0f000ed7d2f3", 00:07:11.258 "assigned_rate_limits": { 00:07:11.258 "rw_ios_per_sec": 0, 00:07:11.258 "rw_mbytes_per_sec": 0, 00:07:11.258 "r_mbytes_per_sec": 0, 00:07:11.258 "w_mbytes_per_sec": 0 00:07:11.258 }, 00:07:11.258 "claimed": true, 00:07:11.258 "claim_type": "exclusive_write", 00:07:11.258 "zoned": false, 00:07:11.258 "supported_io_types": { 00:07:11.258 "read": true, 
00:07:11.258 "write": true, 00:07:11.258 "unmap": true, 00:07:11.258 "write_zeroes": true, 00:07:11.258 "flush": true, 00:07:11.258 "reset": true, 00:07:11.258 "compare": false, 00:07:11.258 "compare_and_write": false, 00:07:11.258 "abort": true, 00:07:11.258 "nvme_admin": false, 00:07:11.258 "nvme_io": false 00:07:11.258 }, 00:07:11.258 "memory_domains": [ 00:07:11.258 { 00:07:11.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.258 "dma_device_type": 2 00:07:11.258 } 00:07:11.258 ], 00:07:11.258 "driver_specific": {} 00:07:11.258 }, 00:07:11.258 { 00:07:11.258 "name": "Passthru0", 00:07:11.258 "aliases": [ 00:07:11.258 "917c345d-d89d-5055-bccf-4e15248f4354" 00:07:11.258 ], 00:07:11.258 "product_name": "passthru", 00:07:11.258 "block_size": 512, 00:07:11.258 "num_blocks": 16384, 00:07:11.258 "uuid": "917c345d-d89d-5055-bccf-4e15248f4354", 00:07:11.258 "assigned_rate_limits": { 00:07:11.258 "rw_ios_per_sec": 0, 00:07:11.258 "rw_mbytes_per_sec": 0, 00:07:11.258 "r_mbytes_per_sec": 0, 00:07:11.258 "w_mbytes_per_sec": 0 00:07:11.258 }, 00:07:11.258 "claimed": false, 00:07:11.258 "zoned": false, 00:07:11.258 "supported_io_types": { 00:07:11.258 "read": true, 00:07:11.258 "write": true, 00:07:11.258 "unmap": true, 00:07:11.258 "write_zeroes": true, 00:07:11.258 "flush": true, 00:07:11.258 "reset": true, 00:07:11.258 "compare": false, 00:07:11.258 "compare_and_write": false, 00:07:11.258 "abort": true, 00:07:11.258 "nvme_admin": false, 00:07:11.258 "nvme_io": false 00:07:11.258 }, 00:07:11.258 "memory_domains": [ 00:07:11.258 { 00:07:11.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.258 "dma_device_type": 2 00:07:11.258 } 00:07:11.258 ], 00:07:11.258 "driver_specific": { 00:07:11.258 "passthru": { 00:07:11.258 "name": "Passthru0", 00:07:11.258 "base_bdev_name": "Malloc0" 00:07:11.258 } 00:07:11.258 } 00:07:11.258 } 00:07:11.258 ]' 00:07:11.258 07:07:44 -- rpc/rpc.sh@21 -- # jq length 00:07:11.258 07:07:44 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:11.258 07:07:44 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:11.258 07:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.258 07:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.258 07:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.258 07:07:44 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:11.258 07:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.258 07:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.258 07:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.258 07:07:44 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:11.258 07:07:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.258 07:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.258 07:07:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.258 07:07:44 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:11.258 07:07:44 -- rpc/rpc.sh@26 -- # jq length 00:07:11.518 07:07:44 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:11.518 ************************************ 00:07:11.518 END TEST rpc_integrity 00:07:11.518 ************************************ 00:07:11.518 00:07:11.518 real 0m0.350s 00:07:11.518 user 0m0.216s 00:07:11.518 sys 0m0.045s 00:07:11.518 07:07:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.518 07:07:44 -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 07:07:45 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:11.518 07:07:45 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 
00:07:11.518 07:07:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:11.518 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 ************************************ 00:07:11.518 START TEST rpc_plugins 00:07:11.518 ************************************ 00:07:11.518 07:07:45 -- common/autotest_common.sh@1102 -- # rpc_plugins 00:07:11.518 07:07:45 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:11.518 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.518 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.518 07:07:45 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:11.518 07:07:45 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:11.518 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.518 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.518 07:07:45 -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:11.518 { 00:07:11.518 "name": "Malloc1", 00:07:11.518 "aliases": [ 00:07:11.518 "7122ec67-3fb3-41dd-879b-551b1ee26706" 00:07:11.518 ], 00:07:11.518 "product_name": "Malloc disk", 00:07:11.518 "block_size": 4096, 00:07:11.518 "num_blocks": 256, 00:07:11.518 "uuid": "7122ec67-3fb3-41dd-879b-551b1ee26706", 00:07:11.518 "assigned_rate_limits": { 00:07:11.518 "rw_ios_per_sec": 0, 00:07:11.518 "rw_mbytes_per_sec": 0, 00:07:11.518 "r_mbytes_per_sec": 0, 00:07:11.518 "w_mbytes_per_sec": 0 00:07:11.518 }, 00:07:11.518 "claimed": false, 00:07:11.518 "zoned": false, 00:07:11.518 "supported_io_types": { 00:07:11.518 "read": true, 00:07:11.518 "write": true, 00:07:11.518 "unmap": true, 00:07:11.518 "write_zeroes": true, 00:07:11.518 "flush": true, 00:07:11.518 "reset": true, 00:07:11.518 "compare": false, 00:07:11.518 "compare_and_write": false, 00:07:11.518 "abort": true, 00:07:11.518 "nvme_admin": false, 00:07:11.518 "nvme_io": false 00:07:11.518 }, 00:07:11.518 "memory_domains": [ 00:07:11.518 { 00:07:11.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:11.518 "dma_device_type": 2 00:07:11.518 } 00:07:11.518 ], 00:07:11.518 "driver_specific": {} 00:07:11.518 } 00:07:11.518 ]' 00:07:11.518 07:07:45 -- rpc/rpc.sh@32 -- # jq length 00:07:11.518 07:07:45 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:11.518 07:07:45 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:11.518 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.518 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.518 07:07:45 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:11.518 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.518 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.518 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.518 07:07:45 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:11.518 07:07:45 -- rpc/rpc.sh@36 -- # jq length 00:07:11.518 07:07:45 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:11.518 ************************************ 00:07:11.518 END TEST rpc_plugins 00:07:11.518 ************************************ 00:07:11.518 00:07:11.518 real 0m0.161s 00:07:11.518 user 0m0.107s 00:07:11.518 sys 0m0.019s 00:07:11.518 07:07:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:11.518 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.777 07:07:45 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:07:11.777 07:07:45 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:11.777 07:07:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:11.777 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.777 ************************************ 00:07:11.777 START TEST rpc_trace_cmd_test 00:07:11.777 ************************************ 00:07:11.777 07:07:45 -- common/autotest_common.sh@1102 -- # rpc_trace_cmd_test 00:07:11.777 07:07:45 -- rpc/rpc.sh@40 -- # local info 00:07:11.777 07:07:45 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:11.777 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.777 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:11.777 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.777 07:07:45 -- rpc/rpc.sh@42 -- # info='{ 00:07:11.777 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid106307", 00:07:11.777 "tpoint_group_mask": "0x8", 00:07:11.777 "iscsi_conn": { 00:07:11.777 "mask": "0x2", 00:07:11.777 "tpoint_mask": "0x0" 00:07:11.777 }, 00:07:11.777 "scsi": { 00:07:11.777 "mask": "0x4", 00:07:11.777 "tpoint_mask": "0x0" 00:07:11.777 }, 00:07:11.777 "bdev": { 00:07:11.777 "mask": "0x8", 00:07:11.777 "tpoint_mask": "0xffffffffffffffff" 00:07:11.777 }, 00:07:11.777 "nvmf_rdma": { 00:07:11.777 "mask": "0x10", 00:07:11.777 "tpoint_mask": "0x0" 00:07:11.777 }, 00:07:11.777 "nvmf_tcp": { 00:07:11.777 "mask": "0x20", 00:07:11.777 "tpoint_mask": "0x0" 00:07:11.777 }, 00:07:11.777 "ftl": { 00:07:11.777 "mask": "0x40", 00:07:11.777 "tpoint_mask": "0x0" 00:07:11.777 }, 00:07:11.777 "blobfs": { 00:07:11.777 "mask": "0x80", 00:07:11.777 "tpoint_mask": "0x0" 00:07:11.777 }, 00:07:11.777 "dsa": { 00:07:11.777 "mask": "0x200", 00:07:11.777 "tpoint_mask": "0x0" 00:07:11.777 }, 00:07:11.777 "thread": { 00:07:11.777 "mask": "0x400", 00:07:11.777 "tpoint_mask": "0x0" 00:07:11.777 }, 00:07:11.777 "nvme_pcie": { 00:07:11.777 "mask": "0x800", 00:07:11.777 "tpoint_mask": "0x0" 00:07:11.777 }, 00:07:11.777 "iaa": { 00:07:11.778 "mask": "0x1000", 00:07:11.778 "tpoint_mask": "0x0" 00:07:11.778 }, 00:07:11.778 "nvme_tcp": { 00:07:11.778 "mask": "0x2000", 00:07:11.778 "tpoint_mask": "0x0" 00:07:11.778 }, 00:07:11.778 "bdev_nvme": { 00:07:11.778 "mask": "0x4000", 00:07:11.778 "tpoint_mask": "0x0" 00:07:11.778 } 00:07:11.778 }' 00:07:11.778 07:07:45 -- rpc/rpc.sh@43 -- # jq length 00:07:11.778 07:07:45 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:07:11.778 07:07:45 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:11.778 07:07:45 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:11.778 07:07:45 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:11.778 07:07:45 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:11.778 07:07:45 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:12.036 07:07:45 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:12.036 07:07:45 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:12.036 07:07:45 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:12.036 00:07:12.036 real 0m0.301s 00:07:12.036 user 0m0.272s 00:07:12.036 sys 0m0.024s 00:07:12.036 07:07:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.036 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.036 ************************************ 00:07:12.036 END TEST rpc_trace_cmd_test 00:07:12.036 ************************************ 00:07:12.036 07:07:45 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:12.036 07:07:45 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:12.036 07:07:45 -- rpc/rpc.sh@81 -- # 
run_test rpc_daemon_integrity rpc_integrity 00:07:12.036 07:07:45 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:12.036 07:07:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:12.036 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.036 ************************************ 00:07:12.036 START TEST rpc_daemon_integrity 00:07:12.036 ************************************ 00:07:12.036 07:07:45 -- common/autotest_common.sh@1102 -- # rpc_integrity 00:07:12.036 07:07:45 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:12.037 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.037 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.037 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.037 07:07:45 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:12.037 07:07:45 -- rpc/rpc.sh@13 -- # jq length 00:07:12.037 07:07:45 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:12.037 07:07:45 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:12.037 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.037 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.037 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.037 07:07:45 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:12.037 07:07:45 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:12.037 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.037 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.037 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.037 07:07:45 -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:12.037 { 00:07:12.037 "name": "Malloc2", 00:07:12.037 "aliases": [ 00:07:12.037 "61c906b8-9abd-4245-9286-3e2f94185715" 00:07:12.037 ], 00:07:12.037 "product_name": "Malloc disk", 00:07:12.037 "block_size": 512, 00:07:12.037 "num_blocks": 16384, 00:07:12.037 "uuid": "61c906b8-9abd-4245-9286-3e2f94185715", 00:07:12.037 "assigned_rate_limits": { 00:07:12.037 "rw_ios_per_sec": 0, 00:07:12.037 "rw_mbytes_per_sec": 0, 00:07:12.037 "r_mbytes_per_sec": 0, 00:07:12.037 "w_mbytes_per_sec": 0 00:07:12.037 }, 00:07:12.037 "claimed": false, 00:07:12.037 "zoned": false, 00:07:12.037 "supported_io_types": { 00:07:12.037 "read": true, 00:07:12.037 "write": true, 00:07:12.037 "unmap": true, 00:07:12.037 "write_zeroes": true, 00:07:12.037 "flush": true, 00:07:12.037 "reset": true, 00:07:12.037 "compare": false, 00:07:12.037 "compare_and_write": false, 00:07:12.037 "abort": true, 00:07:12.037 "nvme_admin": false, 00:07:12.037 "nvme_io": false 00:07:12.037 }, 00:07:12.037 "memory_domains": [ 00:07:12.037 { 00:07:12.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.037 "dma_device_type": 2 00:07:12.037 } 00:07:12.037 ], 00:07:12.037 "driver_specific": {} 00:07:12.037 } 00:07:12.037 ]' 00:07:12.037 07:07:45 -- rpc/rpc.sh@17 -- # jq length 00:07:12.296 07:07:45 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:12.296 07:07:45 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:12.296 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.296 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.296 [2024-02-13 07:07:45.772364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:12.296 [2024-02-13 07:07:45.772469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.296 [2024-02-13 07:07:45.772511] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:12.296 
[2024-02-13 07:07:45.772533] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.296 [2024-02-13 07:07:45.775247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.296 [2024-02-13 07:07:45.775335] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:12.296 Passthru0 00:07:12.296 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.296 07:07:45 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:12.296 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.296 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.296 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.296 07:07:45 -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:12.296 { 00:07:12.296 "name": "Malloc2", 00:07:12.296 "aliases": [ 00:07:12.296 "61c906b8-9abd-4245-9286-3e2f94185715" 00:07:12.296 ], 00:07:12.296 "product_name": "Malloc disk", 00:07:12.296 "block_size": 512, 00:07:12.296 "num_blocks": 16384, 00:07:12.296 "uuid": "61c906b8-9abd-4245-9286-3e2f94185715", 00:07:12.296 "assigned_rate_limits": { 00:07:12.296 "rw_ios_per_sec": 0, 00:07:12.296 "rw_mbytes_per_sec": 0, 00:07:12.296 "r_mbytes_per_sec": 0, 00:07:12.296 "w_mbytes_per_sec": 0 00:07:12.296 }, 00:07:12.296 "claimed": true, 00:07:12.296 "claim_type": "exclusive_write", 00:07:12.296 "zoned": false, 00:07:12.296 "supported_io_types": { 00:07:12.296 "read": true, 00:07:12.296 "write": true, 00:07:12.296 "unmap": true, 00:07:12.296 "write_zeroes": true, 00:07:12.296 "flush": true, 00:07:12.296 "reset": true, 00:07:12.296 "compare": false, 00:07:12.296 "compare_and_write": false, 00:07:12.296 "abort": true, 00:07:12.296 "nvme_admin": false, 00:07:12.296 "nvme_io": false 00:07:12.296 }, 00:07:12.296 "memory_domains": [ 00:07:12.296 { 00:07:12.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.296 "dma_device_type": 2 00:07:12.296 } 00:07:12.296 ], 00:07:12.296 "driver_specific": {} 00:07:12.296 }, 00:07:12.296 { 00:07:12.296 "name": "Passthru0", 00:07:12.296 "aliases": [ 00:07:12.296 "4585996d-09ab-541f-8636-cdce7d398604" 00:07:12.296 ], 00:07:12.296 "product_name": "passthru", 00:07:12.296 "block_size": 512, 00:07:12.296 "num_blocks": 16384, 00:07:12.296 "uuid": "4585996d-09ab-541f-8636-cdce7d398604", 00:07:12.296 "assigned_rate_limits": { 00:07:12.296 "rw_ios_per_sec": 0, 00:07:12.296 "rw_mbytes_per_sec": 0, 00:07:12.296 "r_mbytes_per_sec": 0, 00:07:12.296 "w_mbytes_per_sec": 0 00:07:12.296 }, 00:07:12.296 "claimed": false, 00:07:12.296 "zoned": false, 00:07:12.296 "supported_io_types": { 00:07:12.296 "read": true, 00:07:12.296 "write": true, 00:07:12.296 "unmap": true, 00:07:12.296 "write_zeroes": true, 00:07:12.296 "flush": true, 00:07:12.296 "reset": true, 00:07:12.296 "compare": false, 00:07:12.296 "compare_and_write": false, 00:07:12.296 "abort": true, 00:07:12.296 "nvme_admin": false, 00:07:12.296 "nvme_io": false 00:07:12.296 }, 00:07:12.296 "memory_domains": [ 00:07:12.296 { 00:07:12.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.296 "dma_device_type": 2 00:07:12.296 } 00:07:12.296 ], 00:07:12.296 "driver_specific": { 00:07:12.296 "passthru": { 00:07:12.296 "name": "Passthru0", 00:07:12.296 "base_bdev_name": "Malloc2" 00:07:12.296 } 00:07:12.296 } 00:07:12.296 } 00:07:12.296 ]' 00:07:12.296 07:07:45 -- rpc/rpc.sh@21 -- # jq length 00:07:12.296 07:07:45 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:12.296 07:07:45 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:12.296 07:07:45 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.296 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.296 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.296 07:07:45 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:12.296 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.296 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.296 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.296 07:07:45 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:12.296 07:07:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:12.296 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.296 07:07:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:12.296 07:07:45 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:12.296 07:07:45 -- rpc/rpc.sh@26 -- # jq length 00:07:12.296 07:07:45 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:12.296 00:07:12.296 real 0m0.366s 00:07:12.296 user 0m0.234s 00:07:12.296 sys 0m0.037s 00:07:12.296 07:07:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.296 ************************************ 00:07:12.296 END TEST rpc_daemon_integrity 00:07:12.296 ************************************ 00:07:12.296 07:07:45 -- common/autotest_common.sh@10 -- # set +x 00:07:12.555 07:07:46 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:12.555 07:07:46 -- rpc/rpc.sh@84 -- # killprocess 106307 00:07:12.555 07:07:46 -- common/autotest_common.sh@924 -- # '[' -z 106307 ']' 00:07:12.555 07:07:46 -- common/autotest_common.sh@928 -- # kill -0 106307 00:07:12.555 07:07:46 -- common/autotest_common.sh@929 -- # uname 00:07:12.555 07:07:46 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:12.555 07:07:46 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 106307 00:07:12.555 07:07:46 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:12.555 killing process with pid 106307 00:07:12.555 07:07:46 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:12.555 07:07:46 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 106307' 00:07:12.555 07:07:46 -- common/autotest_common.sh@943 -- # kill 106307 00:07:12.555 07:07:46 -- common/autotest_common.sh@948 -- # wait 106307 00:07:15.092 00:07:15.092 real 0m5.467s 00:07:15.092 user 0m6.432s 00:07:15.092 sys 0m0.872s 00:07:15.092 07:07:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.092 07:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:15.092 ************************************ 00:07:15.092 END TEST rpc 00:07:15.092 ************************************ 00:07:15.092 07:07:48 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:15.092 07:07:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:15.092 07:07:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:15.092 07:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:15.092 ************************************ 00:07:15.092 START TEST rpc_client 00:07:15.092 ************************************ 00:07:15.092 07:07:48 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:07:15.092 * Looking for test storage... 
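The teardown just traced runs through a killprocess helper: verify the pid is alive, inspect the process name, then signal and reap it. Reconstructed from the xtrace for pid 106307, with the sudo branch treated as an assumption since its body never executes here:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1
        kill -0 "$pid"                                   # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            : # assumption: the real helper targets the sudo child instead
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

Here process_name resolves to reactor_0 (the SPDK reactor thread name), so the plain kill/wait path is taken.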
00:07:15.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:07:15.092 07:07:48 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:07:15.092 OK 00:07:15.092 07:07:48 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:15.092 00:07:15.092 real 0m0.142s 00:07:15.092 user 0m0.096s 00:07:15.092 sys 0m0.056s 00:07:15.092 07:07:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:15.092 ************************************ 00:07:15.092 END TEST rpc_client 00:07:15.092 ************************************ 00:07:15.092 07:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:15.092 07:07:48 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:15.092 07:07:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:15.092 07:07:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:15.092 07:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:15.092 ************************************ 00:07:15.092 START TEST json_config 00:07:15.092 ************************************ 00:07:15.092 07:07:48 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:07:15.092 07:07:48 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:15.092 07:07:48 -- nvmf/common.sh@7 -- # uname -s 00:07:15.092 07:07:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.092 07:07:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.092 07:07:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.092 07:07:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.092 07:07:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.092 07:07:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.092 07:07:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.092 07:07:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.092 07:07:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.092 07:07:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.092 07:07:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fee4e509-1a2c-4b99-9bc8-6ca633dd30bd 00:07:15.092 07:07:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=fee4e509-1a2c-4b99-9bc8-6ca633dd30bd 00:07:15.092 07:07:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.092 07:07:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.092 07:07:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:15.092 07:07:48 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:15.092 07:07:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.092 07:07:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.092 07:07:48 -- nvmf/common.sh@46 -- # : 0 00:07:15.092 07:07:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:15.092 07:07:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:15.092 07:07:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:15.092 07:07:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.092 07:07:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.092 07:07:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:15.092 07:07:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:15.092 07:07:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:15.092 
07:07:48 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:07:15.092 07:07:48 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:07:15.092 07:07:48 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:07:15.092 07:07:48 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:15.092 07:07:48 -- json_config/json_config.sh@30 -- # app_pid=([target]="" [initiator]="") 00:07:15.092 07:07:48 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:07:15.092 07:07:48 -- json_config/json_config.sh@31 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock' [initiator]='/var/tmp/spdk_initiator.sock') 00:07:15.092 07:07:48 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:07:15.092 07:07:48 -- json_config/json_config.sh@32 -- # app_params=([target]='-m 0x1 -s 1024' [initiator]='-m 0x2 -g -u -s 1024') 00:07:15.092 07:07:48 -- json_config/json_config.sh@32 -- # declare -A app_params 00:07:15.092 07:07:48 -- json_config/json_config.sh@33 -- # configs_path=([target]="$rootdir/spdk_tgt_config.json" [initiator]="$rootdir/spdk_initiator_config.json") 00:07:15.092 07:07:48 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:07:15.092 07:07:48 -- json_config/json_config.sh@43 -- # last_event_id=0 00:07:15.092 07:07:48 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:15.092 07:07:48 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:07:15.092 INFO: JSON configuration test init 00:07:15.092 07:07:48 -- json_config/json_config.sh@420 -- # json_config_test_init 00:07:15.092 07:07:48 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:07:15.092 07:07:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:15.092 07:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:15.092 07:07:48 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:07:15.092 07:07:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:15.092 07:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:15.092 07:07:48 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:07:15.092 07:07:48 -- json_config/json_config.sh@98 -- # local app=target 00:07:15.092 07:07:48 -- json_config/json_config.sh@99 -- # shift 00:07:15.092 07:07:48 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:15.092 07:07:48 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:15.092 07:07:48 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:15.092 07:07:48 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:15.092 07:07:48 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:15.092 07:07:48 -- json_config/json_config.sh@111 -- # app_pid[$app]=106610 00:07:15.092 07:07:48 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:15.092 07:07:48 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:15.092 Waiting for target to run... 
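The four associative arrays declared above (app_pid, app_socket, app_params, configs_path) key every helper in this suite by app role, target or initiator. The tgt_rpc wrapper that json_config.sh@36 expands to throughout the rest of the trace is, in sketch form, just:

    tgt_rpc() {
        "$rootdir/scripts/rpc.py" -s "${app_socket[target]}" "$@"
    }
    # e.g. tgt_rpc load_config, tgt_rpc notify_get_types, tgt_rpc save_config

A matching wrapper for the initiator socket is presumably defined the same way; only the target side is exercised in this run.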
00:07:15.092 07:07:48 -- json_config/json_config.sh@114 -- # waitforlisten 106610 /var/tmp/spdk_tgt.sock 00:07:15.092 07:07:48 -- common/autotest_common.sh@817 -- # '[' -z 106610 ']' 00:07:15.092 07:07:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:15.092 07:07:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:15.092 07:07:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:15.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:15.092 07:07:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:15.092 07:07:48 -- common/autotest_common.sh@10 -- # set +x 00:07:15.092 [2024-02-13 07:07:48.608941] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:07:15.092 [2024-02-13 07:07:48.609228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106610 ] 00:07:15.660 [2024-02-13 07:07:49.116481] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.660 [2024-02-13 07:07:49.286335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:15.660 [2024-02-13 07:07:49.286537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.920 00:07:15.920 07:07:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:15.920 07:07:49 -- common/autotest_common.sh@850 -- # return 0 00:07:15.920 07:07:49 -- json_config/json_config.sh@115 -- # echo '' 00:07:15.920 07:07:49 -- json_config/json_config.sh@322 -- # create_accel_config 00:07:15.920 07:07:49 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:07:15.920 07:07:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:15.920 07:07:49 -- common/autotest_common.sh@10 -- # set +x 00:07:15.920 07:07:49 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:07:15.920 07:07:49 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:07:15.920 07:07:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:15.920 07:07:49 -- common/autotest_common.sh@10 -- # set +x 00:07:16.178 07:07:49 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:16.178 07:07:49 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:07:16.178 07:07:49 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:17.116 07:07:50 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:07:17.116 07:07:50 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:07:17.116 07:07:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:17.116 07:07:50 -- common/autotest_common.sh@10 -- # set +x 00:07:17.116 07:07:50 -- json_config/json_config.sh@48 -- # local ret=0 00:07:17.116 07:07:50 -- json_config/json_config.sh@49 -- # enabled_types=("bdev_register" "bdev_unregister") 00:07:17.116 07:07:50 -- json_config/json_config.sh@49 -- # local enabled_types 00:07:17.116 07:07:50 -- json_config/json_config.sh@51 -- # get_types=($(tgt_rpc notify_get_types | jq -r '.[]')) 00:07:17.116 07:07:50 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:17.116 07:07:50 
-- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:17.116 07:07:50 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:17.116 07:07:50 -- json_config/json_config.sh@51 -- # local get_types 00:07:17.116 07:07:50 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:07:17.116 07:07:50 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:07:17.116 07:07:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:17.116 07:07:50 -- common/autotest_common.sh@10 -- # set +x 00:07:17.375 07:07:50 -- json_config/json_config.sh@58 -- # return 0 00:07:17.375 07:07:50 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:07:17.375 07:07:50 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:07:17.375 07:07:50 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:07:17.375 07:07:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:17.375 07:07:50 -- common/autotest_common.sh@10 -- # set +x 00:07:17.375 07:07:50 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:07:17.375 07:07:50 -- json_config/json_config.sh@160 -- # local expected_notifications 00:07:17.375 07:07:50 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:07:17.375 07:07:50 -- json_config/json_config.sh@164 -- # get_notifications 00:07:17.375 07:07:50 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:17.375 07:07:50 -- json_config/json_config.sh@64 -- # IFS=: 00:07:17.375 07:07:50 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:17.375 07:07:50 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:17.375 07:07:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:17.375 07:07:50 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:17.634 07:07:51 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:17.634 07:07:51 -- json_config/json_config.sh@64 -- # IFS=: 00:07:17.634 07:07:51 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:17.634 07:07:51 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:07:17.634 07:07:51 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:07:17.634 07:07:51 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:07:17.634 07:07:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:07:17.634 Nvme0n1p0 Nvme0n1p1 00:07:17.634 07:07:51 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:07:17.634 07:07:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:07:17.893 [2024-02-13 07:07:51.548937] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:17.893 [2024-02-13 07:07:51.549055] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:17.893 00:07:17.893 07:07:51 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:07:17.893 07:07:51 -- json_config/json_config.sh@36 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:07:18.151 Malloc3 00:07:18.151 07:07:51 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:18.151 07:07:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:07:18.410 [2024-02-13 07:07:52.059516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:18.410 [2024-02-13 07:07:52.059624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.410 [2024-02-13 07:07:52.059662] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:18.410 [2024-02-13 07:07:52.059695] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.410 [2024-02-13 07:07:52.062039] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.410 [2024-02-13 07:07:52.062093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:18.410 PTBdevFromMalloc3 00:07:18.410 07:07:52 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:07:18.410 07:07:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:07:18.678 Null0 00:07:18.678 07:07:52 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:07:18.678 07:07:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:07:18.943 Malloc0 00:07:18.943 07:07:52 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:07:18.943 07:07:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:07:19.202 Malloc1 00:07:19.202 07:07:52 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:07:19.202 07:07:52 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:07:19.460 102400+0 records in 00:07:19.460 102400+0 records out 00:07:19.460 104857600 bytes (105 MB, 100 MiB) copied, 0.321094 s, 327 MB/s 00:07:19.460 07:07:53 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:07:19.460 07:07:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:07:19.720 aio_disk 00:07:19.720 07:07:53 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:07:19.720 07:07:53 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:19.720 07:07:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:07:19.979 24ade80b-5e3e-4603-9eda-ea9bdcb11501 00:07:19.979 07:07:53 -- json_config/json_config.sh@207 -- # 
expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:07:19.979 07:07:53 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:07:19.979 07:07:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:07:20.238 07:07:53 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:07:20.238 07:07:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:07:20.497 07:07:53 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:20.497 07:07:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:07:20.756 07:07:54 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:20.756 07:07:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:07:21.016 07:07:54 -- json_config/json_config.sh@210 -- # [[ 0 -eq 1 ]] 00:07:21.016 07:07:54 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:07:21.016 07:07:54 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:9fb626d8-eac8-4df5-be05-4b175cb42f9b bdev_register:947af528-540c-4aff-86db-25d6d602a1a7 bdev_register:bc5bd9d2-6dbb-4843-9b44-5eb03223e129 bdev_register:284ebca9-d972-4996-a138-fcfaa27d6bc2 00:07:21.016 07:07:54 -- json_config/json_config.sh@70 -- # local events_to_check 00:07:21.016 07:07:54 -- json_config/json_config.sh@71 -- # local recorded_events 00:07:21.016 07:07:54 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:07:21.016 07:07:54 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:9fb626d8-eac8-4df5-be05-4b175cb42f9b bdev_register:947af528-540c-4aff-86db-25d6d602a1a7 bdev_register:bc5bd9d2-6dbb-4843-9b44-5eb03223e129 bdev_register:284ebca9-d972-4996-a138-fcfaa27d6bc2 00:07:21.016 07:07:54 -- json_config/json_config.sh@74 -- # sort 00:07:21.016 07:07:54 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:07:21.016 07:07:54 -- json_config/json_config.sh@75 -- # get_notifications 00:07:21.016 07:07:54 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:07:21.016 07:07:54 -- json_config/json_config.sh@75 -- # sort 00:07:21.016 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.016 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.016 
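The Expected-events check being assembled here is pure sort-and-compare: the expectation list (including the UUIDs captured from the $(tgt_rpc bdev_lvol_create ...) substitutions above) and the notifications recorded by the target are both sorted, then matched element for element. A condensed sketch of the json_config.sh@70-89 logic as it appears in the trace:

    tgt_check_notifications() {
        local events_to_check recorded_events
        events_to_check=($(printf '%s\n' "$@" | sort))
        recorded_events=($(get_notifications | sort))    # "type:ctx:id" lines
        # the real check (@77) is a negated [[ ... != ... ]] pattern match;
        # an element-for-element comparison is equivalent here
        [[ "${events_to_check[*]}" == "${recorded_events[*]}" ]]
    }

get_notifications itself is the tgt_rpc notify_get_notifications / jq pipeline traced below, emitting one bdev_register:NAME triple per event.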
07:07:54 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:07:21.016 07:07:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:07:21.016 07:07:54 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:07:21.275 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.275 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.275 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.275 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.275 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.275 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.275 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:07:21.275 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:9fb626d8-eac8-4df5-be05-4b175cb42f9b 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r 
ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:947af528-540c-4aff-86db-25d6d602a1a7 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:bc5bd9d2-6dbb-4843-9b44-5eb03223e129 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@65 -- # echo bdev_register:284ebca9-d972-4996-a138-fcfaa27d6bc2 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # IFS=: 00:07:21.276 07:07:54 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:07:21.276 07:07:54 -- json_config/json_config.sh@77 -- # [[ bdev_register:284ebca9-d972-4996-a138-fcfaa27d6bc2 bdev_register:947af528-540c-4aff-86db-25d6d602a1a7 bdev_register:9fb626d8-eac8-4df5-be05-4b175cb42f9b bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:bc5bd9d2-6dbb-4843-9b44-5eb03223e129 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\8\4\e\b\c\a\9\-\d\9\7\2\-\4\9\9\6\-\a\1\3\8\-\f\c\f\a\a\2\7\d\6\b\c\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\4\7\a\f\5\2\8\-\5\4\0\c\-\4\a\f\f\-\8\6\d\b\-\2\5\d\6\d\6\0\2\a\1\a\7\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\f\b\6\2\6\d\8\-\e\a\c\8\-\4\d\f\5\-\b\e\0\5\-\4\b\1\7\5\c\b\4\2\f\9\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\b\c\5\b\d\9\d\2\-\6\d\b\b\-\4\8\4\3\-\9\b\4\4\-\5\e\b\0\3\2\2\3\e\1\2\9 ]] 00:07:21.276 07:07:54 -- json_config/json_config.sh@89 -- # cat 00:07:21.276 07:07:54 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:284ebca9-d972-4996-a138-fcfaa27d6bc2 bdev_register:947af528-540c-4aff-86db-25d6d602a1a7 bdev_register:9fb626d8-eac8-4df5-be05-4b175cb42f9b bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:bc5bd9d2-6dbb-4843-9b44-5eb03223e129 00:07:21.276 Expected events matched: 00:07:21.276 bdev_register:284ebca9-d972-4996-a138-fcfaa27d6bc2 00:07:21.276 bdev_register:947af528-540c-4aff-86db-25d6d602a1a7 00:07:21.276 bdev_register:9fb626d8-eac8-4df5-be05-4b175cb42f9b 00:07:21.276 bdev_register:Malloc0 00:07:21.276 bdev_register:Malloc0p0 00:07:21.276 bdev_register:Malloc0p1 00:07:21.276 bdev_register:Malloc0p2 00:07:21.276 bdev_register:Malloc1 00:07:21.276 bdev_register:Malloc3 00:07:21.276 bdev_register:Null0 00:07:21.276 bdev_register:Nvme0n1 00:07:21.276 bdev_register:Nvme0n1p0 
00:07:21.276 bdev_register:Nvme0n1p1 00:07:21.276 bdev_register:PTBdevFromMalloc3 00:07:21.276 bdev_register:aio_disk 00:07:21.276 bdev_register:bc5bd9d2-6dbb-4843-9b44-5eb03223e129 00:07:21.276 07:07:54 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:07:21.276 07:07:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:21.276 07:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.276 07:07:54 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:07:21.276 07:07:54 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:07:21.276 07:07:54 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:07:21.276 07:07:54 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:07:21.276 07:07:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:21.276 07:07:54 -- common/autotest_common.sh@10 -- # set +x 00:07:21.276 07:07:54 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:07:21.276 07:07:54 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:21.276 07:07:54 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:21.535 MallocBdevForConfigChangeCheck 00:07:21.535 07:07:55 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:07:21.535 07:07:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:21.535 07:07:55 -- common/autotest_common.sh@10 -- # set +x 00:07:21.535 07:07:55 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:07:21.535 07:07:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:22.103 INFO: shutting down applications... 00:07:22.103 07:07:55 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 
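Shutting down is a two-stage affair: clear_config.py first strips every runtime object over RPC, then the script polls save_config through config_filter.py until the result comes back empty. A sketch of that drain loop, matching the count=100 / check_empty trace that follows (the retry delay is an assumption):

    json_config_clear() {
        local app=$1 count=100
        "$rootdir/test/json_config/clear_config.py" \
            -s "${app_socket[$app]}" clear_config
        while (( count > 0 )); do
            "$rootdir/scripts/rpc.py" -s "${app_socket[$app]}" save_config \
                | "$rootdir/test/json_config/config_filter.py" -method delete_global_parameters \
                | "$rootdir/test/json_config/config_filter.py" -method check_empty \
                && break
            (( count-- ))
            sleep 0.1                       # assumed pause between polls
        done
        (( count > 0 ))                     # fail if the config never drained
    }

In this run the very first check_empty succeeds, so the loop breaks with count still at 100.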
00:07:22.103 07:07:55 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:07:22.103 07:07:55 -- json_config/json_config.sh@431 -- # json_config_clear target 00:07:22.103 07:07:55 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:07:22.103 07:07:55 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:22.103 [2024-02-13 07:07:55.734305] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:07:22.362 Calling clear_vhost_scsi_subsystem 00:07:22.362 Calling clear_iscsi_subsystem 00:07:22.362 Calling clear_vhost_blk_subsystem 00:07:22.362 Calling clear_nbd_subsystem 00:07:22.362 Calling clear_nvmf_subsystem 00:07:22.362 Calling clear_bdev_subsystem 00:07:22.362 Calling clear_accel_subsystem 00:07:22.362 Calling clear_iobuf_subsystem 00:07:22.362 Calling clear_sock_subsystem 00:07:22.362 Calling clear_vmd_subsystem 00:07:22.362 Calling clear_scheduler_subsystem 00:07:22.362 07:07:55 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:07:22.362 07:07:55 -- json_config/json_config.sh@396 -- # count=100 00:07:22.362 07:07:55 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:07:22.362 07:07:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:22.362 07:07:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:22.362 07:07:55 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:07:22.934 07:07:56 -- json_config/json_config.sh@398 -- # break 00:07:22.934 07:07:56 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:07:22.934 07:07:56 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:07:22.934 07:07:56 -- json_config/json_config.sh@120 -- # local app=target 00:07:22.934 07:07:56 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:07:22.934 07:07:56 -- json_config/json_config.sh@124 -- # [[ -n 106610 ]] 00:07:22.934 07:07:56 -- json_config/json_config.sh@127 -- # kill -SIGINT 106610 00:07:22.934 07:07:56 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:07:22.934 07:07:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:22.934 07:07:56 -- json_config/json_config.sh@130 -- # kill -0 106610 00:07:22.934 07:07:56 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:23.201 07:07:56 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:23.201 07:07:56 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:23.201 07:07:56 -- json_config/json_config.sh@130 -- # kill -0 106610 00:07:23.201 07:07:56 -- json_config/json_config.sh@134 -- # sleep 0.5 00:07:23.770 SPDK target shutdown done 00:07:23.770 INFO: relaunching applications... 00:07:23.770 Waiting for target to run... 00:07:23.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
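
The shutdown dance traced here (kill -SIGINT followed by repeated kill -0 probes, up to 30 tries with 0.5 s sleeps) is a plain bash pattern; a sketch of what json_config_test_shutdown_app is doing, using the PID from this run, looks like:

    pid=106610                    # target PID from this log
    kill -SIGINT "$pid"           # ask spdk_tgt to shut down gracefully
    for ((i = 0; i < 30; i++)); do
        # signal 0 delivers nothing; it only tests whether the PID still exists
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done
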
00:07:23.770 07:07:57 -- json_config/json_config.sh@129 -- # (( i++ )) 00:07:23.770 07:07:57 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:07:23.770 07:07:57 -- json_config/json_config.sh@130 -- # kill -0 106610 00:07:23.770 07:07:57 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:07:23.770 07:07:57 -- json_config/json_config.sh@132 -- # break 00:07:23.770 07:07:57 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:07:23.770 07:07:57 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:07:23.770 07:07:57 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:07:23.770 07:07:57 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:23.770 07:07:57 -- json_config/json_config.sh@98 -- # local app=target 00:07:23.770 07:07:57 -- json_config/json_config.sh@99 -- # shift 00:07:23.770 07:07:57 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:07:23.770 07:07:57 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:07:23.770 07:07:57 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:07:23.770 07:07:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:23.770 07:07:57 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:07:23.770 07:07:57 -- json_config/json_config.sh@111 -- # app_pid[$app]=106878 00:07:23.770 07:07:57 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:07:23.770 07:07:57 -- json_config/json_config.sh@114 -- # waitforlisten 106878 /var/tmp/spdk_tgt.sock 00:07:23.770 07:07:57 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:23.770 07:07:57 -- common/autotest_common.sh@817 -- # '[' -z 106878 ']' 00:07:23.770 07:07:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:23.770 07:07:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:23.770 07:07:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:23.770 07:07:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:23.770 07:07:57 -- common/autotest_common.sh@10 -- # set +x 00:07:23.770 [2024-02-13 07:07:57.420425] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:07:23.770 [2024-02-13 07:07:57.420671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106878 ] 00:07:24.337 [2024-02-13 07:07:57.958170] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.596 [2024-02-13 07:07:58.148009] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:24.596 [2024-02-13 07:07:58.148279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.596 [2024-02-13 07:07:58.148347] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:25.162 [2024-02-13 07:07:58.822367] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:25.162 [2024-02-13 07:07:58.822489] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:07:25.162 [2024-02-13 07:07:58.830311] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:25.162 [2024-02-13 07:07:58.830372] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:07:25.162 [2024-02-13 07:07:58.838349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:25.162 [2024-02-13 07:07:58.838420] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:25.162 [2024-02-13 07:07:58.838452] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:25.421 [2024-02-13 07:07:58.927132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:25.421 [2024-02-13 07:07:58.927235] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:25.421 [2024-02-13 07:07:58.927271] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:25.421 [2024-02-13 07:07:58.927299] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:25.421 [2024-02-13 07:07:58.927822] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:25.421 [2024-02-13 07:07:58.927872] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:07:25.679 00:07:25.679 INFO: Checking if target configuration is the same... 00:07:25.679 07:07:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:25.679 07:07:59 -- common/autotest_common.sh@850 -- # return 0 00:07:25.679 07:07:59 -- json_config/json_config.sh@115 -- # echo '' 00:07:25.679 07:07:59 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:07:25.679 07:07:59 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 
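
The configuration check announced above boils down to normalizing two JSON configs and diffing them, as the json_diff.sh trace that follows shows. A rough equivalent, assuming config_filter.py -method sort reads JSON on stdin as the trace suggests:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    live=$(mktemp)
    saved=$(mktemp)
    # live target config, normalized
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config |
        "$SPDK_DIR/test/json_config/config_filter.py" -method sort > "$live"
    # on-disk config the target was launched with, normalized the same way
    "$SPDK_DIR/test/json_config/config_filter.py" -method sort \
        < "$SPDK_DIR/spdk_tgt_config.json" > "$saved"
    diff -u "$saved" "$live" && echo 'INFO: JSON config files are the same'
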
00:07:25.679 07:07:59 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:25.679 07:07:59 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:07:25.679 07:07:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:25.679 + '[' 2 -ne 2 ']' 00:07:25.679 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:25.679 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:07:25.679 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:25.679 +++ basename /dev/fd/62 00:07:25.679 ++ mktemp /tmp/62.XXX 00:07:25.679 + tmp_file_1=/tmp/62.JvV 00:07:25.679 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:25.679 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:25.679 + tmp_file_2=/tmp/spdk_tgt_config.json.CLs 00:07:25.679 + ret=0 00:07:25.679 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:25.938 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:26.197 + diff -u /tmp/62.JvV /tmp/spdk_tgt_config.json.CLs 00:07:26.197 INFO: JSON config files are the same 00:07:26.197 + echo 'INFO: JSON config files are the same' 00:07:26.197 + rm /tmp/62.JvV /tmp/spdk_tgt_config.json.CLs 00:07:26.197 + exit 0 00:07:26.197 INFO: changing configuration and checking if this can be detected... 00:07:26.197 07:07:59 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:07:26.197 07:07:59 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:26.197 07:07:59 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:26.197 07:07:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:26.456 07:07:59 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:26.456 07:07:59 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:07:26.456 07:07:59 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:26.456 + '[' 2 -ne 2 ']' 00:07:26.456 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:07:26.456 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:07:26.456 + rootdir=/home/vagrant/spdk_repo/spdk 00:07:26.456 +++ basename /dev/fd/62 00:07:26.456 ++ mktemp /tmp/62.XXX 00:07:26.456 + tmp_file_1=/tmp/62.XKw 00:07:26.456 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:26.456 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:26.456 + tmp_file_2=/tmp/spdk_tgt_config.json.py2 00:07:26.456 + ret=0 00:07:26.456 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:26.715 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:07:26.715 + diff -u /tmp/62.XKw /tmp/spdk_tgt_config.json.py2 00:07:26.715 + ret=1 00:07:26.715 + echo '=== Start of file: /tmp/62.XKw ===' 00:07:26.715 + cat /tmp/62.XKw 00:07:26.715 + echo '=== End of file: /tmp/62.XKw ===' 00:07:26.715 + echo '' 00:07:26.715 + echo '=== Start of file: /tmp/spdk_tgt_config.json.py2 ===' 00:07:26.715 + cat /tmp/spdk_tgt_config.json.py2 00:07:26.715 + echo '=== End of file: /tmp/spdk_tgt_config.json.py2 ===' 00:07:26.715 + echo '' 00:07:26.715 + rm /tmp/62.XKw /tmp/spdk_tgt_config.json.py2 00:07:26.715 + exit 1 00:07:26.715 INFO: configuration change detected. 00:07:26.715 07:08:00 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:07:26.715 07:08:00 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:07:26.715 07:08:00 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:07:26.715 07:08:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:26.715 07:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.715 07:08:00 -- json_config/json_config.sh@360 -- # local ret=0 00:07:26.715 07:08:00 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:07:26.715 07:08:00 -- json_config/json_config.sh@370 -- # [[ -n 106878 ]] 00:07:26.715 07:08:00 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:07:26.715 07:08:00 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:07:26.715 07:08:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:26.715 07:08:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.715 07:08:00 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:07:26.715 07:08:00 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:07:26.715 07:08:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:07:26.974 07:08:00 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:07:26.974 07:08:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:07:27.232 07:08:00 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:07:27.232 07:08:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:07:27.491 07:08:00 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:07:27.491 07:08:00 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:07:27.750 07:08:01 -- json_config/json_config.sh@246 -- # uname -s 00:07:27.750 07:08:01 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:07:27.750 07:08:01 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:07:27.750 07:08:01 -- 
json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:07:27.750 07:08:01 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:07:27.750 07:08:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:27.750 07:08:01 -- common/autotest_common.sh@10 -- # set +x 00:07:27.750 07:08:01 -- json_config/json_config.sh@376 -- # killprocess 106878 00:07:27.750 07:08:01 -- common/autotest_common.sh@924 -- # '[' -z 106878 ']' 00:07:27.750 07:08:01 -- common/autotest_common.sh@928 -- # kill -0 106878 00:07:27.750 07:08:01 -- common/autotest_common.sh@929 -- # uname 00:07:27.750 07:08:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:27.750 07:08:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 106878 00:07:27.750 killing process with pid 106878 00:07:27.750 07:08:01 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:27.750 07:08:01 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:27.750 07:08:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 106878' 00:07:27.750 07:08:01 -- common/autotest_common.sh@943 -- # kill 106878 00:07:27.750 07:08:01 -- common/autotest_common.sh@948 -- # wait 106878 00:07:27.751 [2024-02-13 07:08:01.268339] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:28.696 07:08:02 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:07:28.696 07:08:02 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:07:28.696 07:08:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:28.696 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:28.696 INFO: Success 00:07:28.696 07:08:02 -- json_config/json_config.sh@381 -- # return 0 00:07:28.696 07:08:02 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:07:28.696 00:07:28.696 real 0m13.782s 00:07:28.696 user 0m19.962s 00:07:28.696 sys 0m2.578s 00:07:28.696 07:08:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:28.696 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:28.696 ************************************ 00:07:28.696 END TEST json_config 00:07:28.696 ************************************ 00:07:28.696 07:08:02 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:28.696 07:08:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:28.696 07:08:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:28.697 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:28.697 ************************************ 00:07:28.697 START TEST json_config_extra_key 00:07:28.697 ************************************ 00:07:28.697 07:08:02 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:28.697 07:08:02 -- nvmf/common.sh@7 -- # uname -s 00:07:28.697 07:08:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.697 07:08:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.697 07:08:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.697 07:08:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.697 07:08:02 -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.697 07:08:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.697 07:08:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.697 07:08:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.697 07:08:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.697 07:08:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.697 07:08:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5edd5465-1028-434d-969e-f0d4fdff9234 00:07:28.697 07:08:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5edd5465-1028-434d-969e-f0d4fdff9234 00:07:28.697 07:08:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.697 07:08:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.697 07:08:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:28.697 07:08:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:28.697 07:08:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.697 07:08:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.697 07:08:02 -- nvmf/common.sh@46 -- # : 0 00:07:28.697 07:08:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:28.697 07:08:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:28.697 07:08:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:28.697 07:08:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.697 07:08:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.697 07:08:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:28.697 07:08:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:28.697 07:08:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@16 -- # app_pid=([target]="") 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@17 -- # app_socket=([target]='/var/tmp/spdk_tgt.sock') 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@18 -- # app_params=([target]='-m 0x1 -s 1024') 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@19 -- # configs_path=([target]="$rootdir/test/json_config/extra_key.json") 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:07:28.697 INFO: launching applications... 
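
The launch that follows pairs spdk_tgt startup with waitforlisten on the RPC socket. waitforlisten's real implementation lives in autotest_common.sh and differs in detail; an illustrative stand-in, built only from the launch line visible in this log plus the stock spdk_get_version RPC, might look like:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json "$SPDK_DIR/test/json_config/extra_key.json" &
    pid=$!
    # poll until the UNIX socket accepts RPCs (hypothetical simplification of
    # waitforlisten, which also bounds the number of retries)
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock spdk_get_version \
            >/dev/null 2>&1; do
        sleep 0.1
    done
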
00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@25 -- # shift 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=107071 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:07:28.697 Waiting for target to run... 00:07:28.697 07:08:02 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 107071 /var/tmp/spdk_tgt.sock 00:07:28.697 07:08:02 -- common/autotest_common.sh@817 -- # '[' -z 107071 ']' 00:07:28.697 07:08:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:28.697 07:08:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:28.697 07:08:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:28.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:28.697 07:08:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:28.697 07:08:02 -- common/autotest_common.sh@10 -- # set +x 00:07:28.955 [2024-02-13 07:08:02.441881] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:07:28.955 [2024-02-13 07:08:02.442106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107071 ] 00:07:29.539 [2024-02-13 07:08:02.974441] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.539 [2024-02-13 07:08:03.143360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:29.539 [2024-02-13 07:08:03.143644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.540 [2024-02-13 07:08:03.143711] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:07:30.488 00:07:30.488 INFO: shutting down applications... 00:07:30.488 07:08:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:30.488 07:08:04 -- common/autotest_common.sh@850 -- # return 0 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 107071 ]] 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 107071 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:07:30.488 [2024-02-13 07:08:04.056157] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 107071 00:07:30.488 07:08:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:31.054 07:08:04 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:31.054 07:08:04 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:31.054 07:08:04 -- json_config/json_config_extra_key.sh@50 -- # kill -0 107071 00:07:31.054 07:08:04 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:31.621 07:08:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:31.621 07:08:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:31.621 07:08:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 107071 00:07:31.621 07:08:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:31.880 07:08:05 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:31.880 07:08:05 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:31.880 07:08:05 -- json_config/json_config_extra_key.sh@50 -- # kill -0 107071 00:07:31.880 07:08:05 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:32.448 07:08:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:32.448 07:08:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:32.448 07:08:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 107071 00:07:32.448 07:08:06 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:07:33.015 SPDK target shutdown done 00:07:33.015 Success 00:07:33.015 07:08:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:07:33.015 07:08:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:07:33.015 07:08:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 107071 00:07:33.015 07:08:06 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:07:33.015 07:08:06 -- json_config/json_config_extra_key.sh@52 -- # break 00:07:33.015 07:08:06 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:07:33.015 07:08:06 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:07:33.015 07:08:06 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:07:33.015 00:07:33.015 real 0m4.291s 00:07:33.015 user 0m4.075s 00:07:33.015 sys 0m0.617s 00:07:33.015 07:08:06 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:33.015 07:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:33.015 ************************************ 00:07:33.015 END TEST json_config_extra_key 00:07:33.015 ************************************ 00:07:33.015 07:08:06 -- spdk/autotest.sh@180 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:33.015 07:08:06 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:33.015 07:08:06 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:33.015 07:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:33.015 ************************************ 00:07:33.015 START TEST alias_rpc 00:07:33.015 ************************************ 00:07:33.016 07:08:06 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:33.016 * Looking for test storage... 00:07:33.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:07:33.275 07:08:06 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:33.275 07:08:06 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=107196 00:07:33.275 07:08:06 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:33.275 07:08:06 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 107196 00:07:33.275 07:08:06 -- common/autotest_common.sh@817 -- # '[' -z 107196 ']' 00:07:33.275 07:08:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.275 07:08:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:33.275 07:08:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.275 07:08:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:33.275 07:08:06 -- common/autotest_common.sh@10 -- # set +x 00:07:33.275 [2024-02-13 07:08:06.795378] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:07:33.275 [2024-02-13 07:08:06.795602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107196 ] 00:07:33.275 [2024-02-13 07:08:06.962124] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.534 [2024-02-13 07:08:07.163461] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:33.534 [2024-02-13 07:08:07.163797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.910 07:08:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:34.910 07:08:08 -- common/autotest_common.sh@850 -- # return 0 00:07:34.910 07:08:08 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:07:35.169 07:08:08 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 107196 00:07:35.169 07:08:08 -- common/autotest_common.sh@924 -- # '[' -z 107196 ']' 00:07:35.169 07:08:08 -- common/autotest_common.sh@928 -- # kill -0 107196 00:07:35.169 07:08:08 -- common/autotest_common.sh@929 -- # uname 00:07:35.169 07:08:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:35.169 07:08:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 107196 00:07:35.169 killing process with pid 107196 00:07:35.169 07:08:08 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:35.169 07:08:08 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:35.169 07:08:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 107196' 00:07:35.169 07:08:08 -- common/autotest_common.sh@943 -- # kill 107196 00:07:35.169 07:08:08 -- common/autotest_common.sh@948 -- # wait 107196 00:07:37.712 00:07:37.712 real 0m4.273s 00:07:37.712 user 0m4.588s 00:07:37.712 sys 0m0.631s 00:07:37.712 07:08:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.712 ************************************ 00:07:37.712 END TEST alias_rpc 00:07:37.712 ************************************ 00:07:37.712 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.712 07:08:10 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:07:37.712 07:08:10 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:37.712 07:08:10 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:37.712 07:08:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:37.712 07:08:10 -- common/autotest_common.sh@10 -- # set +x 00:07:37.712 ************************************ 00:07:37.712 START TEST spdkcli_tcp 00:07:37.712 ************************************ 00:07:37.712 07:08:10 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:07:37.712 * Looking for test storage... 
00:07:37.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:07:37.712 07:08:11 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:07:37.712 07:08:11 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:07:37.712 07:08:11 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:07:37.712 07:08:11 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:37.712 07:08:11 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:37.712 07:08:11 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:37.712 07:08:11 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:37.712 07:08:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:37.712 07:08:11 -- common/autotest_common.sh@10 -- # set +x 00:07:37.712 07:08:11 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=107307 00:07:37.712 07:08:11 -- spdkcli/tcp.sh@27 -- # waitforlisten 107307 00:07:37.712 07:08:11 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:37.712 07:08:11 -- common/autotest_common.sh@817 -- # '[' -z 107307 ']' 00:07:37.712 07:08:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.712 07:08:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:37.712 07:08:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.712 07:08:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:37.712 07:08:11 -- common/autotest_common.sh@10 -- # set +x 00:07:37.712 [2024-02-13 07:08:11.114338] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:07:37.712 [2024-02-13 07:08:11.114556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107307 ] 00:07:37.712 [2024-02-13 07:08:11.285296] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:37.971 [2024-02-13 07:08:11.496062] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.971 [2024-02-13 07:08:11.496447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.971 [2024-02-13 07:08:11.496454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.347 07:08:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:39.347 07:08:12 -- common/autotest_common.sh@850 -- # return 0 00:07:39.347 07:08:12 -- spdkcli/tcp.sh@31 -- # socat_pid=107343 00:07:39.347 07:08:12 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:39.347 07:08:12 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:39.347 [ 00:07:39.347 "spdk_get_version", 00:07:39.347 "rpc_get_methods", 00:07:39.347 "trace_get_info", 00:07:39.347 "trace_get_tpoint_group_mask", 00:07:39.347 "trace_disable_tpoint_group", 00:07:39.347 "trace_enable_tpoint_group", 00:07:39.347 "trace_clear_tpoint_mask", 00:07:39.347 "trace_set_tpoint_mask", 00:07:39.348 "framework_get_pci_devices", 00:07:39.348 "framework_get_config", 00:07:39.348 "framework_get_subsystems", 00:07:39.348 "iobuf_get_stats", 00:07:39.348 "iobuf_set_options", 00:07:39.348 "sock_set_default_impl", 00:07:39.348 "sock_impl_set_options", 00:07:39.348 "sock_impl_get_options", 00:07:39.348 "vmd_rescan", 00:07:39.348 "vmd_remove_device", 00:07:39.348 "vmd_enable", 00:07:39.348 "accel_get_stats", 00:07:39.348 "accel_set_options", 00:07:39.348 "accel_set_driver", 00:07:39.348 "accel_crypto_key_destroy", 00:07:39.348 "accel_crypto_keys_get", 00:07:39.348 "accel_crypto_key_create", 00:07:39.348 "accel_assign_opc", 00:07:39.348 "accel_get_module_info", 00:07:39.348 "accel_get_opc_assignments", 00:07:39.348 "notify_get_notifications", 00:07:39.348 "notify_get_types", 00:07:39.348 "bdev_get_histogram", 00:07:39.348 "bdev_enable_histogram", 00:07:39.348 "bdev_set_qos_limit", 00:07:39.348 "bdev_set_qd_sampling_period", 00:07:39.348 "bdev_get_bdevs", 00:07:39.348 "bdev_reset_iostat", 00:07:39.348 "bdev_get_iostat", 00:07:39.348 "bdev_examine", 00:07:39.348 "bdev_wait_for_examine", 00:07:39.348 "bdev_set_options", 00:07:39.348 "scsi_get_devices", 00:07:39.348 "thread_set_cpumask", 00:07:39.348 "framework_get_scheduler", 00:07:39.348 "framework_set_scheduler", 00:07:39.348 "framework_get_reactors", 00:07:39.348 "thread_get_io_channels", 00:07:39.348 "thread_get_pollers", 00:07:39.348 "thread_get_stats", 00:07:39.348 "framework_monitor_context_switch", 00:07:39.348 "spdk_kill_instance", 00:07:39.348 "log_enable_timestamps", 00:07:39.348 "log_get_flags", 00:07:39.348 "log_clear_flag", 00:07:39.348 "log_set_flag", 00:07:39.348 "log_get_level", 00:07:39.348 "log_set_level", 00:07:39.348 "log_get_print_level", 00:07:39.348 "log_set_print_level", 00:07:39.348 "framework_enable_cpumask_locks", 00:07:39.348 "framework_disable_cpumask_locks", 00:07:39.348 "framework_wait_init", 00:07:39.348 "framework_start_init", 00:07:39.348 "virtio_blk_create_transport", 00:07:39.348 "virtio_blk_get_transports", 
00:07:39.348 "vhost_controller_set_coalescing", 00:07:39.348 "vhost_get_controllers", 00:07:39.348 "vhost_delete_controller", 00:07:39.348 "vhost_create_blk_controller", 00:07:39.348 "vhost_scsi_controller_remove_target", 00:07:39.348 "vhost_scsi_controller_add_target", 00:07:39.348 "vhost_start_scsi_controller", 00:07:39.348 "vhost_create_scsi_controller", 00:07:39.348 "nbd_get_disks", 00:07:39.348 "nbd_stop_disk", 00:07:39.348 "nbd_start_disk", 00:07:39.348 "env_dpdk_get_mem_stats", 00:07:39.348 "nvmf_subsystem_get_listeners", 00:07:39.348 "nvmf_subsystem_get_qpairs", 00:07:39.348 "nvmf_subsystem_get_controllers", 00:07:39.348 "nvmf_get_stats", 00:07:39.348 "nvmf_get_transports", 00:07:39.348 "nvmf_create_transport", 00:07:39.348 "nvmf_get_targets", 00:07:39.348 "nvmf_delete_target", 00:07:39.348 "nvmf_create_target", 00:07:39.348 "nvmf_subsystem_allow_any_host", 00:07:39.348 "nvmf_subsystem_remove_host", 00:07:39.348 "nvmf_subsystem_add_host", 00:07:39.348 "nvmf_subsystem_remove_ns", 00:07:39.348 "nvmf_subsystem_add_ns", 00:07:39.348 "nvmf_subsystem_listener_set_ana_state", 00:07:39.348 "nvmf_discovery_get_referrals", 00:07:39.348 "nvmf_discovery_remove_referral", 00:07:39.348 "nvmf_discovery_add_referral", 00:07:39.348 "nvmf_subsystem_remove_listener", 00:07:39.348 "nvmf_subsystem_add_listener", 00:07:39.348 "nvmf_delete_subsystem", 00:07:39.348 "nvmf_create_subsystem", 00:07:39.348 "nvmf_get_subsystems", 00:07:39.348 "nvmf_set_crdt", 00:07:39.348 "nvmf_set_config", 00:07:39.348 "nvmf_set_max_subsystems", 00:07:39.348 "iscsi_set_options", 00:07:39.348 "iscsi_get_auth_groups", 00:07:39.348 "iscsi_auth_group_remove_secret", 00:07:39.348 "iscsi_auth_group_add_secret", 00:07:39.348 "iscsi_delete_auth_group", 00:07:39.348 "iscsi_create_auth_group", 00:07:39.348 "iscsi_set_discovery_auth", 00:07:39.348 "iscsi_get_options", 00:07:39.348 "iscsi_target_node_request_logout", 00:07:39.348 "iscsi_target_node_set_redirect", 00:07:39.348 "iscsi_target_node_set_auth", 00:07:39.348 "iscsi_target_node_add_lun", 00:07:39.348 "iscsi_get_connections", 00:07:39.348 "iscsi_portal_group_set_auth", 00:07:39.348 "iscsi_start_portal_group", 00:07:39.348 "iscsi_delete_portal_group", 00:07:39.348 "iscsi_create_portal_group", 00:07:39.348 "iscsi_get_portal_groups", 00:07:39.348 "iscsi_delete_target_node", 00:07:39.348 "iscsi_target_node_remove_pg_ig_maps", 00:07:39.348 "iscsi_target_node_add_pg_ig_maps", 00:07:39.348 "iscsi_create_target_node", 00:07:39.348 "iscsi_get_target_nodes", 00:07:39.348 "iscsi_delete_initiator_group", 00:07:39.348 "iscsi_initiator_group_remove_initiators", 00:07:39.348 "iscsi_initiator_group_add_initiators", 00:07:39.348 "iscsi_create_initiator_group", 00:07:39.348 "iscsi_get_initiator_groups", 00:07:39.348 "iaa_scan_accel_module", 00:07:39.348 "dsa_scan_accel_module", 00:07:39.348 "ioat_scan_accel_module", 00:07:39.348 "accel_error_inject_error", 00:07:39.348 "bdev_iscsi_delete", 00:07:39.348 "bdev_iscsi_create", 00:07:39.348 "bdev_iscsi_set_options", 00:07:39.348 "bdev_virtio_attach_controller", 00:07:39.348 "bdev_virtio_scsi_get_devices", 00:07:39.348 "bdev_virtio_detach_controller", 00:07:39.348 "bdev_virtio_blk_set_hotplug", 00:07:39.348 "bdev_ftl_set_property", 00:07:39.348 "bdev_ftl_get_properties", 00:07:39.348 "bdev_ftl_get_stats", 00:07:39.348 "bdev_ftl_unmap", 00:07:39.348 "bdev_ftl_unload", 00:07:39.348 "bdev_ftl_delete", 00:07:39.348 "bdev_ftl_load", 00:07:39.348 "bdev_ftl_create", 00:07:39.348 "bdev_aio_delete", 00:07:39.348 "bdev_aio_rescan", 00:07:39.348 "bdev_aio_create", 
00:07:39.348 "blobfs_create", 00:07:39.348 "blobfs_detect", 00:07:39.348 "blobfs_set_cache_size", 00:07:39.348 "bdev_zone_block_delete", 00:07:39.348 "bdev_zone_block_create", 00:07:39.348 "bdev_delay_delete", 00:07:39.348 "bdev_delay_create", 00:07:39.348 "bdev_delay_update_latency", 00:07:39.348 "bdev_split_delete", 00:07:39.348 "bdev_split_create", 00:07:39.348 "bdev_error_inject_error", 00:07:39.348 "bdev_error_delete", 00:07:39.348 "bdev_error_create", 00:07:39.348 "bdev_raid_set_options", 00:07:39.348 "bdev_raid_remove_base_bdev", 00:07:39.348 "bdev_raid_add_base_bdev", 00:07:39.348 "bdev_raid_delete", 00:07:39.348 "bdev_raid_create", 00:07:39.348 "bdev_raid_get_bdevs", 00:07:39.348 "bdev_lvol_grow_lvstore", 00:07:39.348 "bdev_lvol_get_lvols", 00:07:39.348 "bdev_lvol_get_lvstores", 00:07:39.348 "bdev_lvol_delete", 00:07:39.348 "bdev_lvol_set_read_only", 00:07:39.348 "bdev_lvol_resize", 00:07:39.348 "bdev_lvol_decouple_parent", 00:07:39.348 "bdev_lvol_inflate", 00:07:39.348 "bdev_lvol_rename", 00:07:39.348 "bdev_lvol_clone_bdev", 00:07:39.348 "bdev_lvol_clone", 00:07:39.348 "bdev_lvol_snapshot", 00:07:39.348 "bdev_lvol_create", 00:07:39.348 "bdev_lvol_delete_lvstore", 00:07:39.348 "bdev_lvol_rename_lvstore", 00:07:39.348 "bdev_lvol_create_lvstore", 00:07:39.348 "bdev_passthru_delete", 00:07:39.348 "bdev_passthru_create", 00:07:39.348 "bdev_nvme_cuse_unregister", 00:07:39.348 "bdev_nvme_cuse_register", 00:07:39.348 "bdev_opal_new_user", 00:07:39.348 "bdev_opal_set_lock_state", 00:07:39.348 "bdev_opal_delete", 00:07:39.348 "bdev_opal_get_info", 00:07:39.348 "bdev_opal_create", 00:07:39.348 "bdev_nvme_opal_revert", 00:07:39.348 "bdev_nvme_opal_init", 00:07:39.348 "bdev_nvme_send_cmd", 00:07:39.348 "bdev_nvme_get_path_iostat", 00:07:39.348 "bdev_nvme_get_mdns_discovery_info", 00:07:39.348 "bdev_nvme_stop_mdns_discovery", 00:07:39.348 "bdev_nvme_start_mdns_discovery", 00:07:39.348 "bdev_nvme_set_multipath_policy", 00:07:39.348 "bdev_nvme_set_preferred_path", 00:07:39.348 "bdev_nvme_get_io_paths", 00:07:39.348 "bdev_nvme_remove_error_injection", 00:07:39.348 "bdev_nvme_add_error_injection", 00:07:39.348 "bdev_nvme_get_discovery_info", 00:07:39.348 "bdev_nvme_stop_discovery", 00:07:39.348 "bdev_nvme_start_discovery", 00:07:39.348 "bdev_nvme_get_controller_health_info", 00:07:39.348 "bdev_nvme_disable_controller", 00:07:39.348 "bdev_nvme_enable_controller", 00:07:39.348 "bdev_nvme_reset_controller", 00:07:39.348 "bdev_nvme_get_transport_statistics", 00:07:39.348 "bdev_nvme_apply_firmware", 00:07:39.348 "bdev_nvme_detach_controller", 00:07:39.348 "bdev_nvme_get_controllers", 00:07:39.348 "bdev_nvme_attach_controller", 00:07:39.348 "bdev_nvme_set_hotplug", 00:07:39.348 "bdev_nvme_set_options", 00:07:39.348 "bdev_null_resize", 00:07:39.348 "bdev_null_delete", 00:07:39.348 "bdev_null_create", 00:07:39.348 "bdev_malloc_delete", 00:07:39.348 "bdev_malloc_create" 00:07:39.348 ] 00:07:39.348 07:08:12 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:39.348 07:08:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:39.348 07:08:12 -- common/autotest_common.sh@10 -- # set +x 00:07:39.608 07:08:13 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:39.608 07:08:13 -- spdkcli/tcp.sh@38 -- # killprocess 107307 00:07:39.608 07:08:13 -- common/autotest_common.sh@924 -- # '[' -z 107307 ']' 00:07:39.608 07:08:13 -- common/autotest_common.sh@928 -- # kill -0 107307 00:07:39.608 07:08:13 -- common/autotest_common.sh@929 -- # uname 00:07:39.608 07:08:13 -- common/autotest_common.sh@929 
-- # '[' Linux = Linux ']' 00:07:39.608 07:08:13 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 107307 00:07:39.608 07:08:13 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:39.608 07:08:13 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:39.608 07:08:13 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 107307' 00:07:39.608 killing process with pid 107307 00:07:39.608 07:08:13 -- common/autotest_common.sh@943 -- # kill 107307 00:07:39.608 07:08:13 -- common/autotest_common.sh@948 -- # wait 107307 00:07:41.514 00:07:41.514 real 0m4.130s 00:07:41.514 user 0m7.646s 00:07:41.514 sys 0m0.589s 00:07:41.514 07:08:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:41.514 07:08:15 -- common/autotest_common.sh@10 -- # set +x 00:07:41.514 ************************************ 00:07:41.514 END TEST spdkcli_tcp 00:07:41.514 ************************************ 00:07:41.514 07:08:15 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:41.514 07:08:15 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:41.514 07:08:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:41.514 07:08:15 -- common/autotest_common.sh@10 -- # set +x 00:07:41.514 ************************************ 00:07:41.514 START TEST dpdk_mem_utility 00:07:41.514 ************************************ 00:07:41.514 07:08:15 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:41.773 * Looking for test storage... 00:07:41.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:07:41.773 07:08:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:41.773 07:08:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=107447 00:07:41.773 07:08:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 107447 00:07:41.773 07:08:15 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:07:41.773 07:08:15 -- common/autotest_common.sh@817 -- # '[' -z 107447 ']' 00:07:41.773 07:08:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.773 07:08:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:41.773 07:08:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.773 07:08:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:41.773 07:08:15 -- common/autotest_common.sh@10 -- # set +x 00:07:41.773 [2024-02-13 07:08:15.292802] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:07:41.773 [2024-02-13 07:08:15.293018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107447 ] 00:07:42.032 [2024-02-13 07:08:15.462887] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.032 [2024-02-13 07:08:15.667921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:42.032 [2024-02-13 07:08:15.668123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.409 07:08:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:43.410 07:08:17 -- common/autotest_common.sh@850 -- # return 0 00:07:43.410 07:08:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:43.410 07:08:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:43.410 07:08:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:43.410 07:08:17 -- common/autotest_common.sh@10 -- # set +x 00:07:43.410 { 00:07:43.410 "filename": "/tmp/spdk_mem_dump.txt" 00:07:43.410 } 00:07:43.410 07:08:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:43.410 07:08:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:07:43.410 DPDK memory size 820.000000 MiB in 1 heap(s) 00:07:43.410 1 heaps totaling size 820.000000 MiB 00:07:43.410 size: 820.000000 MiB heap id: 0 00:07:43.410 end heaps---------- 00:07:43.410 8 mempools totaling size 598.116089 MiB 00:07:43.410 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:43.410 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:43.410 size: 84.521057 MiB name: bdev_io_107447 00:07:43.410 size: 51.011292 MiB name: evtpool_107447 00:07:43.410 size: 50.003479 MiB name: msgpool_107447 00:07:43.410 size: 21.763794 MiB name: PDU_Pool 00:07:43.410 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:43.410 size: 0.026123 MiB name: Session_Pool 00:07:43.410 end mempools------- 00:07:43.410 6 memzones totaling size 4.142822 MiB 00:07:43.410 size: 1.000366 MiB name: RG_ring_0_107447 00:07:43.410 size: 1.000366 MiB name: RG_ring_1_107447 00:07:43.410 size: 1.000366 MiB name: RG_ring_4_107447 00:07:43.410 size: 1.000366 MiB name: RG_ring_5_107447 00:07:43.410 size: 0.125366 MiB name: RG_ring_2_107447 00:07:43.410 size: 0.015991 MiB name: RG_ring_3_107447 00:07:43.410 end memzones------- 00:07:43.410 07:08:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:07:43.670 heap id: 0 total size: 820.000000 MiB number of busy elements: 226 number of free elements: 18 00:07:43.670 list of free elements. 
size: 18.469727 MiB 00:07:43.670 element at address: 0x200000400000 with size: 1.999451 MiB 00:07:43.670 element at address: 0x200000800000 with size: 1.996887 MiB 00:07:43.670 element at address: 0x200007000000 with size: 1.995972 MiB 00:07:43.670 element at address: 0x20000b200000 with size: 1.995972 MiB 00:07:43.670 element at address: 0x200019100040 with size: 0.999939 MiB 00:07:43.670 element at address: 0x200019500040 with size: 0.999939 MiB 00:07:43.670 element at address: 0x200019600000 with size: 0.999329 MiB 00:07:43.670 element at address: 0x200003e00000 with size: 0.996094 MiB 00:07:43.670 element at address: 0x200032200000 with size: 0.994324 MiB 00:07:43.670 element at address: 0x200018e00000 with size: 0.959656 MiB 00:07:43.670 element at address: 0x200019900040 with size: 0.937256 MiB 00:07:43.670 element at address: 0x200000200000 with size: 0.835083 MiB 00:07:43.670 element at address: 0x20001b000000 with size: 0.561218 MiB 00:07:43.670 element at address: 0x200019200000 with size: 0.489197 MiB 00:07:43.670 element at address: 0x200019a00000 with size: 0.485413 MiB 00:07:43.670 element at address: 0x200013800000 with size: 0.468140 MiB 00:07:43.670 element at address: 0x200028400000 with size: 0.399719 MiB 00:07:43.670 element at address: 0x200003a00000 with size: 0.356140 MiB 00:07:43.670 list of standard malloc elements. size: 199.265869 MiB 00:07:43.670 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:07:43.670 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:07:43.670 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:07:43.670 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:07:43.670 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:07:43.670 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:07:43.670 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:07:43.670 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:07:43.670 element at address: 0x20000b1ff380 with size: 0.000366 MiB 00:07:43.670 element at address: 0x20000b1ff040 with size: 0.000305 MiB 00:07:43.670 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:07:43.670 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6180 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6280 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6380 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6480 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6580 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6680 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6780 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6880 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6980 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6a80 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7000 with size: 0.000244 MiB 
00:07:43.670 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:07:43.670 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200003aff980 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200003affa80 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200003eff000 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ff180 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ff280 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200013877d80 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200013877e80 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200013877f80 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200013878080 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200013878180 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200013878280 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200013878380 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200013878480 with size: 0.000244 MiB 00:07:43.671 element at 
address: 0x200013878580 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200019abc680 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b08fac0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b08fbc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b08fcc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b08fdc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b08fec0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b08ffc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0900c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0901c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0902c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0903c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0904c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0905c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0906c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b091fc0 
with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0929c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0950c0 with size: 0.000244 MiB 
00:07:43.671 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200028466540 with size: 0.000244 MiB 00:07:43.671 element at address: 0x200028466640 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20002846d300 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20002846d580 with size: 0.000244 MiB 00:07:43.671 element at address: 0x20002846d680 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846d780 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846d880 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846d980 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846da80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846db80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846de80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846df80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e080 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e180 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e280 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e380 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e480 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e580 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e680 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e780 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e880 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846e980 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f080 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f180 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f280 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f380 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f480 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f580 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f680 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f780 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f880 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846f980 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:07:43.672 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:07:43.672 list of memzone associated elements. 
size: 602.264404 MiB 00:07:43.672 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:07:43.672 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:43.672 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:07:43.672 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:43.672 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:07:43.672 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_107447_0 00:07:43.672 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:07:43.672 associated memzone info: size: 48.002930 MiB name: MP_evtpool_107447_0 00:07:43.672 element at address: 0x200003fff340 with size: 48.003113 MiB 00:07:43.672 associated memzone info: size: 48.002930 MiB name: MP_msgpool_107447_0 00:07:43.672 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:07:43.672 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:43.672 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:07:43.672 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:43.672 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:07:43.672 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_107447 00:07:43.672 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:07:43.672 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_107447 00:07:43.672 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:07:43.672 associated memzone info: size: 1.007996 MiB name: MP_evtpool_107447 00:07:43.672 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:07:43.672 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:43.672 element at address: 0x200019abc780 with size: 1.008179 MiB 00:07:43.672 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:43.672 element at address: 0x200018efde00 with size: 1.008179 MiB 00:07:43.672 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:43.672 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:07:43.672 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:43.672 element at address: 0x200003eff100 with size: 1.000549 MiB 00:07:43.672 associated memzone info: size: 1.000366 MiB name: RG_ring_0_107447 00:07:43.672 element at address: 0x200003affb80 with size: 1.000549 MiB 00:07:43.672 associated memzone info: size: 1.000366 MiB name: RG_ring_1_107447 00:07:43.672 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:07:43.672 associated memzone info: size: 1.000366 MiB name: RG_ring_4_107447 00:07:43.672 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:07:43.672 associated memzone info: size: 1.000366 MiB name: RG_ring_5_107447 00:07:43.672 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:07:43.672 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_107447 00:07:43.672 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:07:43.672 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:43.672 element at address: 0x200013878680 with size: 0.500549 MiB 00:07:43.672 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:43.672 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:07:43.672 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:43.672 element at address: 0x200003adf740 with size: 0.125549 MiB 00:07:43.672 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_107447 00:07:43.672 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:07:43.672 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:43.672 element at address: 0x200028466740 with size: 0.023804 MiB 00:07:43.672 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:43.672 element at address: 0x200003adb500 with size: 0.016174 MiB 00:07:43.672 associated memzone info: size: 0.015991 MiB name: RG_ring_3_107447 00:07:43.672 element at address: 0x20002846c8c0 with size: 0.002502 MiB 00:07:43.672 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:43.672 element at address: 0x2000002d6b80 with size: 0.000366 MiB 00:07:43.672 associated memzone info: size: 0.000183 MiB name: MP_msgpool_107447 00:07:43.672 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:07:43.672 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_107447 00:07:43.672 element at address: 0x20002846d400 with size: 0.000366 MiB 00:07:43.672 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:43.672 07:08:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:43.672 07:08:17 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 107447 00:07:43.672 07:08:17 -- common/autotest_common.sh@924 -- # '[' -z 107447 ']' 00:07:43.672 07:08:17 -- common/autotest_common.sh@928 -- # kill -0 107447 00:07:43.672 07:08:17 -- common/autotest_common.sh@929 -- # uname 00:07:43.672 07:08:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:43.672 07:08:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 107447 00:07:43.672 killing process with pid 107447 00:07:43.672 07:08:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:07:43.672 07:08:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:07:43.672 07:08:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 107447' 00:07:43.672 07:08:17 -- common/autotest_common.sh@943 -- # kill 107447 00:07:43.672 07:08:17 -- common/autotest_common.sh@948 -- # wait 107447 00:07:46.206 00:07:46.206 real 0m4.516s 00:07:46.206 user 0m4.843s 00:07:46.206 sys 0m0.527s 00:07:46.206 07:08:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:46.206 07:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:46.206 ************************************ 00:07:46.206 END TEST dpdk_mem_utility 00:07:46.206 ************************************ 00:07:46.206 07:08:19 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:46.206 07:08:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:46.206 07:08:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:46.206 07:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:46.206 ************************************ 00:07:46.206 START TEST event 00:07:46.206 ************************************ 00:07:46.206 07:08:19 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:07:46.206 * Looking for test storage... 
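Both the dpdk_mem_utility suite that just closed and the event suite launched here go through the harness's run_test wrapper, which prints the asterisk START/END banners and the real/user/sys timings visible throughout this log. A minimal sketch of a wrapper in that spirit follows; this is an assumption-level reconstruction, not the verbatim autotest_common.sh helper, which additionally manages xtrace and validates its arguments:

    # Hypothetical condensed equivalent of the run_test helper seen in this trace.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"               # emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    # usage: run_test_sketch event /home/vagrant/spdk_repo/spdk/test/event/event.sh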
00:07:46.206 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:46.206 07:08:19 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:46.206 07:08:19 -- bdev/nbd_common.sh@6 -- # set -e 00:07:46.206 07:08:19 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:46.206 07:08:19 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:07:46.206 07:08:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:46.206 07:08:19 -- common/autotest_common.sh@10 -- # set +x 00:07:46.206 ************************************ 00:07:46.206 START TEST event_perf 00:07:46.206 ************************************ 00:07:46.206 07:08:19 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:46.206 Running I/O for 1 seconds...[2024-02-13 07:08:19.854460] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:07:46.206 [2024-02-13 07:08:19.854920] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107583 ] 00:07:46.464 [2024-02-13 07:08:20.048693] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.723 Running I/O for 1 seconds...[2024-02-13 07:08:20.323887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.723 [2024-02-13 07:08:20.324044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.723 [2024-02-13 07:08:20.324134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.723 [2024-02-13 07:08:20.324142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.098 00:07:48.098 lcore 0: 104093 00:07:48.098 lcore 1: 104091 00:07:48.098 lcore 2: 104091 00:07:48.098 lcore 3: 104093 00:07:48.098 done. 00:07:48.098 ************************************ 00:07:48.098 END TEST event_perf 00:07:48.098 ************************************ 00:07:48.098 00:07:48.098 real 0m1.941s 00:07:48.098 user 0m4.704s 00:07:48.098 sys 0m0.136s 00:07:48.098 07:08:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:48.098 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.358 07:08:21 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:48.358 07:08:21 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:48.358 07:08:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:48.358 07:08:21 -- common/autotest_common.sh@10 -- # set +x 00:07:48.358 ************************************ 00:07:48.358 START TEST event_reactor 00:07:48.358 ************************************ 00:07:48.358 07:08:21 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:07:48.358 [2024-02-13 07:08:21.838876] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:07:48.358 [2024-02-13 07:08:21.839181] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107636 ] 00:07:48.358 [2024-02-13 07:08:21.992369] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.617 [2024-02-13 07:08:22.200019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.994 test_start 00:07:49.994 oneshot 00:07:49.994 tick 100 00:07:49.994 tick 100 00:07:49.994 tick 250 00:07:49.994 tick 100 00:07:49.994 tick 100 00:07:49.994 tick 100 00:07:49.994 tick 250 00:07:49.994 tick 500 00:07:49.994 tick 100 00:07:49.994 tick 100 00:07:49.994 tick 250 00:07:49.994 tick 100 00:07:49.994 tick 100 00:07:49.994 test_end 00:07:49.994 ************************************ 00:07:49.994 END TEST event_reactor 00:07:49.994 ************************************ 00:07:49.994 00:07:49.994 real 0m1.745s 00:07:49.994 user 0m1.529s 00:07:49.994 sys 0m0.116s 00:07:49.994 07:08:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.994 07:08:23 -- common/autotest_common.sh@10 -- # set +x 00:07:49.994 07:08:23 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:49.994 07:08:23 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:07:49.994 07:08:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:49.994 07:08:23 -- common/autotest_common.sh@10 -- # set +x 00:07:49.994 ************************************ 00:07:49.994 START TEST event_reactor_perf 00:07:49.994 ************************************ 00:07:49.994 07:08:23 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:49.994 [2024-02-13 07:08:23.644012] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:07:49.994 [2024-02-13 07:08:23.644359] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107686 ] 00:07:50.254 [2024-02-13 07:08:23.807294] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.528 [2024-02-13 07:08:24.033088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.905 test_start 00:07:51.905 test_end 00:07:51.905 Performance: 357604 events per second 00:07:51.905 ************************************ 00:07:51.905 END TEST event_reactor_perf 00:07:51.905 ************************************ 00:07:51.905 00:07:51.905 real 0m1.792s 00:07:51.905 user 0m1.581s 00:07:51.905 sys 0m0.109s 00:07:51.905 07:08:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:51.905 07:08:25 -- common/autotest_common.sh@10 -- # set +x 00:07:51.905 07:08:25 -- event/event.sh@49 -- # uname -s 00:07:51.905 07:08:25 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:51.905 07:08:25 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:51.905 07:08:25 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:51.905 07:08:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:51.905 07:08:25 -- common/autotest_common.sh@10 -- # set +x 00:07:51.905 ************************************ 00:07:51.905 START TEST event_scheduler 00:07:51.905 ************************************ 00:07:51.905 07:08:25 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:07:51.905 * Looking for test storage... 00:07:51.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:07:51.905 07:08:25 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:51.905 07:08:25 -- scheduler/scheduler.sh@35 -- # scheduler_pid=107755 00:07:51.905 07:08:25 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:51.905 07:08:25 -- scheduler/scheduler.sh@37 -- # waitforlisten 107755 00:07:51.905 07:08:25 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:51.905 07:08:25 -- common/autotest_common.sh@817 -- # '[' -z 107755 ']' 00:07:51.905 07:08:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.905 07:08:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:51.905 07:08:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.905 07:08:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:51.905 07:08:25 -- common/autotest_common.sh@10 -- # set +x 00:07:51.905 [2024-02-13 07:08:25.593746] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:07:51.905 [2024-02-13 07:08:25.594079] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107755 ] 00:07:52.163 [2024-02-13 07:08:25.775165] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.421 [2024-02-13 07:08:26.027143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.421 [2024-02-13 07:08:26.027471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.421 [2024-02-13 07:08:26.027373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.421 [2024-02-13 07:08:26.027477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.988 07:08:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:52.988 07:08:26 -- common/autotest_common.sh@850 -- # return 0 00:07:52.988 07:08:26 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:52.988 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:52.988 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:52.988 POWER: Env isn't set yet! 00:07:52.988 POWER: Attempting to initialise ACPI cpufreq power management... 00:07:52.988 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:52.988 POWER: Cannot set governor of lcore 0 to userspace 00:07:52.988 POWER: Attempting to initialise PSTAT power management... 00:07:52.988 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:52.988 POWER: Cannot set governor of lcore 0 to performance 00:07:52.988 POWER: Attempting to initialise AMD PSTATE power management... 00:07:52.988 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:52.988 POWER: Cannot set governor of lcore 0 to userspace 00:07:52.988 POWER: Attempting to initialise CPPC power management... 00:07:52.988 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:07:52.988 POWER: Cannot set governor of lcore 0 to userspace 00:07:52.988 POWER: Attempting to initialise VM power management... 00:07:52.988 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:52.988 POWER: Unable to set Power Management Environment for lcore 0 00:07:52.988 [2024-02-13 07:08:26.603200] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:07:52.988 [2024-02-13 07:08:26.603371] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:07:52.988 [2024-02-13 07:08:26.603474] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:07:52.988 07:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:52.988 07:08:26 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:52.988 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:52.988 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.247 [2024-02-13 07:08:26.927081] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
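The POWER errors above show the dynamic scheduler probing each cpufreq backend in turn (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC, then the VM power-agent channel) and failing inside the VM, so the DPDK governor never initializes; framework_set_scheduler still returns success and framework_start_init completes, which is why the test proceeds. A sketch of the two RPCs the harness issues once the scheduler app is listening on /var/tmp/spdk.sock (the rpc.py path is assumed to match the repo layout used elsewhere in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Select the dynamic scheduler; without usable cpufreq governors it runs
    # without the dpdk governor, as the NOTICE lines above show.
    $rpc framework_set_scheduler dynamic
    # Release the --wait-for-rpc holding state and finish subsystem init.
    $rpc framework_start_init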
00:07:53.247 07:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.247 07:08:26 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:53.247 07:08:26 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:53.247 07:08:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:53.247 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.506 ************************************ 00:07:53.506 START TEST scheduler_create_thread 00:07:53.506 ************************************ 00:07:53.506 07:08:26 -- common/autotest_common.sh@1102 -- # scheduler_create_thread 00:07:53.506 07:08:26 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:53.506 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.506 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.506 2 00:07:53.506 07:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.506 07:08:26 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:53.506 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.506 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.506 3 00:07:53.506 07:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.506 07:08:26 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:53.506 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.506 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.506 4 00:07:53.506 07:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.506 07:08:26 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:53.506 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.506 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.506 5 00:07:53.506 07:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.506 07:08:26 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:53.506 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.506 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.507 6 00:07:53.507 07:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.507 07:08:26 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:53.507 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.507 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.507 7 00:07:53.507 07:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.507 07:08:26 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:53.507 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.507 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.507 8 00:07:53.507 07:08:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.507 07:08:26 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:53.507 07:08:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.507 07:08:26 -- common/autotest_common.sh@10 -- # set +x 00:07:53.507 9 00:07:53.507 
07:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.507 07:08:27 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:53.507 07:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.507 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:07:53.507 10 00:07:53.507 07:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.507 07:08:27 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:53.507 07:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.507 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:07:53.507 07:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.507 07:08:27 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:53.507 07:08:27 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:53.507 07:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.507 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:07:53.507 07:08:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:53.507 07:08:27 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:53.507 07:08:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:53.507 07:08:27 -- common/autotest_common.sh@10 -- # set +x 00:07:54.441 07:08:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:54.441 07:08:28 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:54.441 07:08:28 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:54.441 07:08:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:54.441 07:08:28 -- common/autotest_common.sh@10 -- # set +x 00:07:55.818 ************************************ 00:07:55.818 END TEST scheduler_create_thread 00:07:55.818 ************************************ 00:07:55.818 07:08:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:55.818 00:07:55.818 real 0m2.165s 00:07:55.818 user 0m0.015s 00:07:55.818 sys 0m0.000s 00:07:55.818 07:08:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.818 07:08:29 -- common/autotest_common.sh@10 -- # set +x 00:07:55.818 07:08:29 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:55.818 07:08:29 -- scheduler/scheduler.sh@46 -- # killprocess 107755 00:07:55.818 07:08:29 -- common/autotest_common.sh@924 -- # '[' -z 107755 ']' 00:07:55.818 07:08:29 -- common/autotest_common.sh@928 -- # kill -0 107755 00:07:55.818 07:08:29 -- common/autotest_common.sh@929 -- # uname 00:07:55.818 07:08:29 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:07:55.818 07:08:29 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 107755 00:07:55.818 killing process with pid 107755 00:07:55.818 07:08:29 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:07:55.818 07:08:29 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:07:55.818 07:08:29 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 107755' 00:07:55.818 07:08:29 -- common/autotest_common.sh@943 -- # kill 107755 00:07:55.818 07:08:29 -- common/autotest_common.sh@948 -- # wait 107755 00:07:56.077 [2024-02-13 07:08:29.587146] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
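The teardown above traces the harness's killprocess helper step by step: check the pid argument, probe the process with kill -0, resolve its name with ps, refuse to signal a sudo wrapper, then kill and wait. A sketch assembled from those traced lines (an assumption-level reconstruction, not the verbatim autotest_common.sh source):

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1       # never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reaps the child; only valid for processes this shell spawned
    }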
00:07:57.454 ************************************ 00:07:57.454 END TEST event_scheduler 00:07:57.454 ************************************ 00:07:57.454 00:07:57.454 real 0m5.413s 00:07:57.454 user 0m9.019s 00:07:57.454 sys 0m0.462s 00:07:57.454 07:08:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.454 07:08:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.454 07:08:30 -- event/event.sh@51 -- # modprobe -n nbd 00:07:57.454 07:08:30 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:57.454 07:08:30 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:07:57.454 07:08:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:07:57.454 07:08:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.454 ************************************ 00:07:57.454 START TEST app_repeat 00:07:57.454 ************************************ 00:07:57.454 07:08:30 -- common/autotest_common.sh@1102 -- # app_repeat_test 00:07:57.454 07:08:30 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.454 07:08:30 -- event/event.sh@13 -- # nbd_list=("/dev/nbd0" "/dev/nbd1") 00:07:57.454 07:08:30 -- event/event.sh@13 -- # local nbd_list 00:07:57.454 07:08:30 -- event/event.sh@14 -- # bdev_list=("Malloc0" "Malloc1") 00:07:57.454 07:08:30 -- event/event.sh@14 -- # local bdev_list 00:07:57.454 07:08:30 -- event/event.sh@15 -- # local repeat_times=4 00:07:57.454 07:08:30 -- event/event.sh@17 -- # modprobe nbd 00:07:57.454 07:08:30 -- event/event.sh@19 -- # repeat_pid=107904 00:07:57.454 07:08:30 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:57.454 Process app_repeat pid: 107904 00:07:57.454 07:08:30 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.454 07:08:30 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 107904' 00:07:57.454 07:08:30 -- event/event.sh@23 -- # for i in {0..2} 00:07:57.454 spdk_app_start Round 0 00:07:57.454 07:08:30 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:57.454 07:08:30 -- event/event.sh@25 -- # waitforlisten 107904 /var/tmp/spdk-nbd.sock 00:07:57.454 07:08:30 -- common/autotest_common.sh@817 -- # '[' -z 107904 ']' 00:07:57.454 07:08:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:57.454 07:08:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:57.454 07:08:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:57.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:57.454 07:08:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:57.454 07:08:30 -- common/autotest_common.sh@10 -- # set +x 00:07:57.454 [2024-02-13 07:08:30.985245] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:07:57.454 [2024-02-13 07:08:30.985668] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107904 ] 00:07:57.713 [2024-02-13 07:08:31.165995] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:57.972 [2024-02-13 07:08:31.429773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.972 [2024-02-13 07:08:31.429789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.540 07:08:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:58.540 07:08:31 -- common/autotest_common.sh@850 -- # return 0 00:07:58.540 07:08:31 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:58.799 Malloc0 00:07:58.799 07:08:32 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:59.058 Malloc1 00:07:59.058 07:08:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@12 -- # local i 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:59.058 07:08:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:59.316 /dev/nbd0 00:07:59.316 07:08:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:59.316 07:08:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:59.316 07:08:32 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:07:59.316 07:08:32 -- common/autotest_common.sh@855 -- # local i 00:07:59.316 07:08:32 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:59.316 07:08:32 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:59.316 07:08:32 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:07:59.316 07:08:32 -- common/autotest_common.sh@859 -- # break 00:07:59.316 07:08:32 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:59.316 07:08:32 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:59.316 07:08:32 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:59.316 1+0 records in 00:07:59.316 1+0 records out 00:07:59.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526006 s, 7.8 MB/s 00:07:59.316 07:08:32 -- common/autotest_common.sh@872 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:59.316 07:08:32 -- common/autotest_common.sh@872 -- # size=4096 00:07:59.316 07:08:32 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:59.316 07:08:32 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:59.316 07:08:32 -- common/autotest_common.sh@875 -- # return 0 00:07:59.316 07:08:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.316 07:08:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:59.316 07:08:32 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:59.883 /dev/nbd1 00:07:59.883 07:08:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:59.883 07:08:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:59.883 07:08:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:07:59.883 07:08:33 -- common/autotest_common.sh@855 -- # local i 00:07:59.883 07:08:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:07:59.883 07:08:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:07:59.883 07:08:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:07:59.883 07:08:33 -- common/autotest_common.sh@859 -- # break 00:07:59.883 07:08:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:07:59.883 07:08:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:07:59.883 07:08:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:59.883 1+0 records in 00:07:59.883 1+0 records out 00:07:59.883 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418229 s, 9.8 MB/s 00:07:59.883 07:08:33 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:59.883 07:08:33 -- common/autotest_common.sh@872 -- # size=4096 00:07:59.883 07:08:33 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:59.883 07:08:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:07:59.883 07:08:33 -- common/autotest_common.sh@875 -- # return 0 00:07:59.883 07:08:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.883 07:08:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:59.883 07:08:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:59.883 07:08:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.883 07:08:33 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:00.143 { 00:08:00.143 "nbd_device": "/dev/nbd0", 00:08:00.143 "bdev_name": "Malloc0" 00:08:00.143 }, 00:08:00.143 { 00:08:00.143 "nbd_device": "/dev/nbd1", 00:08:00.143 "bdev_name": "Malloc1" 00:08:00.143 } 00:08:00.143 ]' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:00.143 { 00:08:00.143 "nbd_device": "/dev/nbd0", 00:08:00.143 "bdev_name": "Malloc0" 00:08:00.143 }, 00:08:00.143 { 00:08:00.143 "nbd_device": "/dev/nbd1", 00:08:00.143 "bdev_name": "Malloc1" 00:08:00.143 } 00:08:00.143 ]' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:00.143 /dev/nbd1' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:00.143 /dev/nbd1' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.143 
07:08:33 -- bdev/nbd_common.sh@65 -- # count=2 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@95 -- # count=2 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:00.143 256+0 records in 00:08:00.143 256+0 records out 00:08:00.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00881162 s, 119 MB/s 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:00.143 256+0 records in 00:08:00.143 256+0 records out 00:08:00.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304806 s, 34.4 MB/s 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:00.143 256+0 records in 00:08:00.143 256+0 records out 00:08:00.143 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336719 s, 31.1 MB/s 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@51 -- # local i 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.143 07:08:33 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:00.401 07:08:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:00.401 
07:08:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:00.401 07:08:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:00.401 07:08:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.401 07:08:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.401 07:08:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.401 07:08:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:00.659 07:08:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:00.659 07:08:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.659 07:08:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.659 07:08:34 -- bdev/nbd_common.sh@41 -- # break 00:08:00.659 07:08:34 -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.659 07:08:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.659 07:08:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@41 -- # break 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.917 07:08:34 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.175 07:08:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:01.175 07:08:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:01.175 07:08:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.435 07:08:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:01.435 07:08:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:01.435 07:08:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.435 07:08:34 -- bdev/nbd_common.sh@65 -- # true 00:08:01.435 07:08:34 -- bdev/nbd_common.sh@65 -- # count=0 00:08:01.435 07:08:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:01.435 07:08:34 -- bdev/nbd_common.sh@104 -- # count=0 00:08:01.435 07:08:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:01.435 07:08:34 -- bdev/nbd_common.sh@109 -- # return 0 00:08:01.435 07:08:34 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:01.694 07:08:35 -- event/event.sh@35 -- # sleep 3 00:08:03.072 [2024-02-13 07:08:36.506794] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.072 [2024-02-13 07:08:36.728415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.072 [2024-02-13 07:08:36.728414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.332 [2024-02-13 07:08:36.938237] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
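That spdk_kill_instance SIGTERM closes Round 0 of app_repeat. Each round creates two malloc bdevs (64 MiB with 4096-byte blocks, per the bdev_malloc_create 64 4096 calls above), exports them as /dev/nbd0 and /dev/nbd1, and verifies them with the dd/cmp cycle traced in nbd_dd_data_verify. A condensed sketch of that write/verify cycle, with the pattern-file path assumed for illustration (the real test keeps it under the repo tree):

    pattern=/tmp/nbdrandtest   # assumed path; the trace uses test/event/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256             # 1 MiB random pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct  # write through the NBD export
        cmp -b -n 1M "$pattern" "$nbd"                             # read back and compare
    done
    rm "$pattern"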
00:08:03.332 [2024-02-13 07:08:36.938707] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:04.709 spdk_app_start Round 1 00:08:04.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:04.709 07:08:38 -- event/event.sh@23 -- # for i in {0..2} 00:08:04.709 07:08:38 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:04.709 07:08:38 -- event/event.sh@25 -- # waitforlisten 107904 /var/tmp/spdk-nbd.sock 00:08:04.709 07:08:38 -- common/autotest_common.sh@817 -- # '[' -z 107904 ']' 00:08:04.709 07:08:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:04.709 07:08:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:04.709 07:08:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:04.709 07:08:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:04.709 07:08:38 -- common/autotest_common.sh@10 -- # set +x 00:08:04.968 07:08:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:04.968 07:08:38 -- common/autotest_common.sh@850 -- # return 0 00:08:04.968 07:08:38 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:05.537 Malloc0 00:08:05.537 07:08:38 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:05.795 Malloc1 00:08:05.795 07:08:39 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@12 -- # local i 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:05.795 07:08:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:06.054 /dev/nbd0 00:08:06.054 07:08:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:06.054 07:08:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:06.054 07:08:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:08:06.054 07:08:39 -- common/autotest_common.sh@855 -- # local i 00:08:06.054 07:08:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:06.054 07:08:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:06.054 07:08:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:08:06.054 07:08:39 -- common/autotest_common.sh@859 -- # break 00:08:06.054 07:08:39 -- common/autotest_common.sh@870 -- # (( 
i = 1 )) 00:08:06.054 07:08:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:06.054 07:08:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:06.054 1+0 records in 00:08:06.054 1+0 records out 00:08:06.054 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478421 s, 8.6 MB/s 00:08:06.054 07:08:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:06.054 07:08:39 -- common/autotest_common.sh@872 -- # size=4096 00:08:06.054 07:08:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:06.054 07:08:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:06.054 07:08:39 -- common/autotest_common.sh@875 -- # return 0 00:08:06.054 07:08:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.054 07:08:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:06.054 07:08:39 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:06.314 /dev/nbd1 00:08:06.314 07:08:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:06.314 07:08:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:06.314 07:08:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:08:06.314 07:08:39 -- common/autotest_common.sh@855 -- # local i 00:08:06.314 07:08:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:06.314 07:08:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:06.314 07:08:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:08:06.314 07:08:39 -- common/autotest_common.sh@859 -- # break 00:08:06.314 07:08:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:06.314 07:08:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:06.314 07:08:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:06.314 1+0 records in 00:08:06.314 1+0 records out 00:08:06.314 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585426 s, 7.0 MB/s 00:08:06.314 07:08:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:06.314 07:08:39 -- common/autotest_common.sh@872 -- # size=4096 00:08:06.314 07:08:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:06.314 07:08:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:06.314 07:08:39 -- common/autotest_common.sh@875 -- # return 0 00:08:06.314 07:08:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:06.314 07:08:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:06.314 07:08:39 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.314 07:08:39 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.314 07:08:39 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:06.573 { 00:08:06.573 "nbd_device": "/dev/nbd0", 00:08:06.573 "bdev_name": "Malloc0" 00:08:06.573 }, 00:08:06.573 { 00:08:06.573 "nbd_device": "/dev/nbd1", 00:08:06.573 "bdev_name": "Malloc1" 00:08:06.573 } 00:08:06.573 ]' 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:06.573 { 00:08:06.573 "nbd_device": "/dev/nbd0", 00:08:06.573 "bdev_name": "Malloc0" 00:08:06.573 }, 00:08:06.573 { 00:08:06.573 "nbd_device": 
"/dev/nbd1", 00:08:06.573 "bdev_name": "Malloc1" 00:08:06.573 } 00:08:06.573 ]' 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:06.573 /dev/nbd1' 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:06.573 /dev/nbd1' 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@65 -- # count=2 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@95 -- # count=2 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:06.573 07:08:40 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:06.832 256+0 records in 00:08:06.832 256+0 records out 00:08:06.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00931081 s, 113 MB/s 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:06.832 256+0 records in 00:08:06.832 256+0 records out 00:08:06.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306718 s, 34.2 MB/s 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:06.832 256+0 records in 00:08:06.832 256+0 records out 00:08:06.832 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0352439 s, 29.8 MB/s 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:06.832 07:08:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 
00:08:06.833 07:08:40 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@51 -- # local i 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.833 07:08:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@41 -- # break 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:07.091 07:08:40 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@41 -- # break 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@45 -- # return 0 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.350 07:08:41 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:07.608 07:08:41 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:07.866 07:08:41 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:07.866 07:08:41 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:07.866 07:08:41 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:07.866 07:08:41 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:07.866 07:08:41 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:07.866 07:08:41 -- bdev/nbd_common.sh@65 -- # true 00:08:07.866 07:08:41 -- bdev/nbd_common.sh@65 -- # count=0 00:08:07.866 07:08:41 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:07.866 07:08:41 -- bdev/nbd_common.sh@104 -- # count=0 00:08:07.867 07:08:41 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:07.867 07:08:41 -- bdev/nbd_common.sh@109 -- # return 0 00:08:07.867 07:08:41 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:08.128 07:08:41 -- event/event.sh@35 -- # sleep 3 00:08:09.557 [2024-02-13 07:08:43.092648] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.816 [2024-02-13 07:08:43.318852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.816 [2024-02-13 07:08:43.318852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.074 [2024-02-13 07:08:43.512755] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
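The block above is nbd_dd_data_verify doing its round trip: a 1 MiB file of random data is generated once with dd, pushed through each exported /dev/nbdX with O_DIRECT, and then byte-compared against the source with cmp. A minimal standalone sketch of the same check, assuming an already-exported device (the paths here are placeholders, not the test's variables):

dev=/dev/nbd0
tmp=$(mktemp)
dd if=/dev/urandom of="$tmp" bs=4096 count=256           # 1 MiB of random data
dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct    # write it straight to the nbd device
cmp -b -n 1M "$tmp" "$dev" && echo "verify OK on $dev"   # byte-wise readback comparison
rm -f "$tmp"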
00:08:10.074 [2024-02-13 07:08:43.512873] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:11.448 spdk_app_start Round 2 00:08:11.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:11.448 07:08:44 -- event/event.sh@23 -- # for i in {0..2} 00:08:11.448 07:08:44 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:11.448 07:08:44 -- event/event.sh@25 -- # waitforlisten 107904 /var/tmp/spdk-nbd.sock 00:08:11.448 07:08:44 -- common/autotest_common.sh@817 -- # '[' -z 107904 ']' 00:08:11.448 07:08:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:11.448 07:08:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:11.448 07:08:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:11.448 07:08:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:11.448 07:08:44 -- common/autotest_common.sh@10 -- # set +x 00:08:11.448 07:08:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:11.448 07:08:45 -- common/autotest_common.sh@850 -- # return 0 00:08:11.448 07:08:45 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:12.013 Malloc0 00:08:12.013 07:08:45 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:12.271 Malloc1 00:08:12.271 07:08:45 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:12.271 07:08:45 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.271 07:08:45 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:08:12.271 07:08:45 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@12 -- # local i 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:12.272 07:08:45 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:12.530 /dev/nbd0 00:08:12.530 07:08:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:12.530 07:08:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:12.530 07:08:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:08:12.530 07:08:46 -- common/autotest_common.sh@855 -- # local i 00:08:12.530 07:08:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:12.530 07:08:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:12.530 07:08:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:08:12.530 07:08:46 -- common/autotest_common.sh@859 -- # break 00:08:12.530 07:08:46 -- common/autotest_common.sh@870 -- # (( 
i = 1 )) 00:08:12.530 07:08:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:12.530 07:08:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:12.530 1+0 records in 00:08:12.530 1+0 records out 00:08:12.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273166 s, 15.0 MB/s 00:08:12.530 07:08:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:12.530 07:08:46 -- common/autotest_common.sh@872 -- # size=4096 00:08:12.530 07:08:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:12.530 07:08:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:12.530 07:08:46 -- common/autotest_common.sh@875 -- # return 0 00:08:12.530 07:08:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.530 07:08:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:12.530 07:08:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:12.788 /dev/nbd1 00:08:12.788 07:08:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:12.788 07:08:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:12.788 07:08:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:08:12.788 07:08:46 -- common/autotest_common.sh@855 -- # local i 00:08:12.788 07:08:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:08:12.788 07:08:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:08:12.788 07:08:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:08:12.788 07:08:46 -- common/autotest_common.sh@859 -- # break 00:08:12.788 07:08:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:08:12.788 07:08:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:08:12.788 07:08:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:12.788 1+0 records in 00:08:12.788 1+0 records out 00:08:12.788 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367106 s, 11.2 MB/s 00:08:12.788 07:08:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:12.788 07:08:46 -- common/autotest_common.sh@872 -- # size=4096 00:08:12.788 07:08:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:12.788 07:08:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:08:12.788 07:08:46 -- common/autotest_common.sh@875 -- # return 0 00:08:12.788 07:08:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.788 07:08:46 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:12.788 07:08:46 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:12.788 07:08:46 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.788 07:08:46 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:13.381 { 00:08:13.381 "nbd_device": "/dev/nbd0", 00:08:13.381 "bdev_name": "Malloc0" 00:08:13.381 }, 00:08:13.381 { 00:08:13.381 "nbd_device": "/dev/nbd1", 00:08:13.381 "bdev_name": "Malloc1" 00:08:13.381 } 00:08:13.381 ]' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:13.381 { 00:08:13.381 "nbd_device": "/dev/nbd0", 00:08:13.381 "bdev_name": "Malloc0" 00:08:13.381 }, 00:08:13.381 { 00:08:13.381 
"nbd_device": "/dev/nbd1", 00:08:13.381 "bdev_name": "Malloc1" 00:08:13.381 } 00:08:13.381 ]' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:13.381 /dev/nbd1' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:13.381 /dev/nbd1' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@65 -- # count=2 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@66 -- # echo 2 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@95 -- # count=2 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:13.381 256+0 records in 00:08:13.381 256+0 records out 00:08:13.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00589031 s, 178 MB/s 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:13.381 256+0 records in 00:08:13.381 256+0 records out 00:08:13.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0294441 s, 35.6 MB/s 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:13.381 256+0 records in 00:08:13.381 256+0 records out 00:08:13.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0413528 s, 25.4 MB/s 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@50 -- # 
nbd_list=($2) 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@51 -- # local i 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.381 07:08:46 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:13.639 07:08:47 -- bdev/nbd_common.sh@41 -- # break 00:08:13.640 07:08:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.640 07:08:47 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.640 07:08:47 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:13.898 07:08:47 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:13.898 07:08:47 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:13.898 07:08:47 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:13.898 07:08:47 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.898 07:08:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.898 07:08:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:13.898 07:08:47 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:08:14.156 07:08:47 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:08:14.156 07:08:47 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.156 07:08:47 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:14.156 07:08:47 -- bdev/nbd_common.sh@41 -- # break 00:08:14.156 07:08:47 -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.156 07:08:47 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:14.156 07:08:47 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.156 07:08:47 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.414 07:08:47 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:14.414 07:08:47 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:14.414 07:08:47 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.414 07:08:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:14.414 07:08:48 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:14.414 07:08:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.414 07:08:48 -- bdev/nbd_common.sh@65 -- # true 00:08:14.414 07:08:48 -- bdev/nbd_common.sh@65 -- # count=0 00:08:14.414 07:08:48 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:14.414 07:08:48 -- bdev/nbd_common.sh@104 -- # count=0 00:08:14.414 07:08:48 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:14.414 07:08:48 -- bdev/nbd_common.sh@109 -- # return 0 00:08:14.414 07:08:48 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:14.979 07:08:48 -- event/event.sh@35 -- # sleep 3 00:08:16.355 
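On this pass nbd0 and nbd1 were still listed in /proc/partitions immediately after nbd_stop_disk, so waitfornbd_exit slept 100 ms and probed again before returning, which is the i++/sleep 0.1 exchange visible above. A sketch of that polling loop, reconstructed from the xtrace (the real helper breaks out of the loop rather than returning directly, but the effect is the same):

waitfornbd_exit() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    # success as soon as the device drops out of the kernel's partition list
    grep -q -w "$nbd_name" /proc/partitions || return 0
    sleep 0.1
  done
  return 1   # still present after roughly two seconds
}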
[2024-02-13 07:08:49.804053] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.355 [2024-02-13 07:08:50.018484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.355 [2024-02-13 07:08:50.018483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.613 [2024-02-13 07:08:50.209060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:16.613 [2024-02-13 07:08:50.209211] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:17.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:17.989 07:08:51 -- event/event.sh@38 -- # waitforlisten 107904 /var/tmp/spdk-nbd.sock 00:08:17.989 07:08:51 -- common/autotest_common.sh@817 -- # '[' -z 107904 ']' 00:08:17.989 07:08:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:17.989 07:08:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:17.989 07:08:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:17.989 07:08:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:17.989 07:08:51 -- common/autotest_common.sh@10 -- # set +x 00:08:18.248 07:08:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:18.248 07:08:51 -- common/autotest_common.sh@850 -- # return 0 00:08:18.248 07:08:51 -- event/event.sh@39 -- # killprocess 107904 00:08:18.248 07:08:51 -- common/autotest_common.sh@924 -- # '[' -z 107904 ']' 00:08:18.248 07:08:51 -- common/autotest_common.sh@928 -- # kill -0 107904 00:08:18.248 07:08:51 -- common/autotest_common.sh@929 -- # uname 00:08:18.248 07:08:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:18.248 07:08:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 107904 00:08:18.248 killing process with pid 107904 00:08:18.248 07:08:51 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:18.248 07:08:51 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:18.248 07:08:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 107904' 00:08:18.248 07:08:51 -- common/autotest_common.sh@943 -- # kill 107904 00:08:18.248 07:08:51 -- common/autotest_common.sh@948 -- # wait 107904 00:08:19.653 spdk_app_start is called in Round 0. 00:08:19.653 Shutdown signal received, stop current app iteration 00:08:19.653 Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 reinitialization... 00:08:19.653 spdk_app_start is called in Round 1. 00:08:19.653 Shutdown signal received, stop current app iteration 00:08:19.653 Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 reinitialization... 00:08:19.653 spdk_app_start is called in Round 2. 00:08:19.653 Shutdown signal received, stop current app iteration 00:08:19.653 Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 reinitialization... 00:08:19.653 spdk_app_start is called in Round 3. 
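The Round 0 through Round 3 messages in this summary, above and just below, come from app_repeat's driver loop: each iteration waits for the relaunched app to listen on /var/tmp/spdk-nbd.sock, recreates the two Malloc bdevs, reruns the nbd write/verify pass, and then sends SIGTERM so the app can begin the next round. Condensed from the xtrace earlier in the run (waitforlisten and nbd_rpc_data_verify are the suite's own helpers; error handling is omitted):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
for i in {0..2}; do
  echo "spdk_app_start Round $i"
  waitforlisten "$pid" /var/tmp/spdk-nbd.sock      # app re-listens after each restart
  $rpc bdev_malloc_create 64 4096                  # Malloc0
  $rpc bdev_malloc_create 64 4096                  # Malloc1
  nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
  $rpc spdk_kill_instance SIGTERM                  # triggers the next round
  sleep 3
done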
00:08:19.653 Shutdown signal received, stop current app iteration 00:08:19.653 ************************************ 00:08:19.653 END TEST app_repeat 00:08:19.653 ************************************ 00:08:19.653 07:08:52 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:19.653 07:08:52 -- event/event.sh@42 -- # return 0 00:08:19.653 00:08:19.653 real 0m22.002s 00:08:19.653 user 0m47.470s 00:08:19.653 sys 0m3.128s 00:08:19.653 07:08:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:19.653 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.653 07:08:52 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:19.653 07:08:52 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:19.653 07:08:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:19.653 07:08:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:19.653 07:08:52 -- common/autotest_common.sh@10 -- # set +x 00:08:19.653 ************************************ 00:08:19.653 START TEST cpu_locks 00:08:19.653 ************************************ 00:08:19.653 07:08:52 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:19.653 * Looking for test storage... 00:08:19.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:19.653 07:08:53 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:19.653 07:08:53 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:19.653 07:08:53 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:19.653 07:08:53 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:19.653 07:08:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:19.653 07:08:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:19.653 07:08:53 -- common/autotest_common.sh@10 -- # set +x 00:08:19.653 ************************************ 00:08:19.653 START TEST default_locks 00:08:19.653 ************************************ 00:08:19.653 07:08:53 -- common/autotest_common.sh@1102 -- # default_locks 00:08:19.653 07:08:53 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=108491 00:08:19.653 07:08:53 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:19.653 07:08:53 -- event/cpu_locks.sh@47 -- # waitforlisten 108491 00:08:19.653 07:08:53 -- common/autotest_common.sh@817 -- # '[' -z 108491 ']' 00:08:19.653 07:08:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.653 07:08:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:19.653 07:08:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.653 07:08:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:19.653 07:08:53 -- common/autotest_common.sh@10 -- # set +x 00:08:19.653 [2024-02-13 07:08:53.135496] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
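Every subtest in this section is dispatched through run_test, which supplies the START TEST/END TEST banners and the real/user/sys timing block seen above. A rough equivalent of that wrapper, inferred from its visible output (the exact banner text and failure handling are assumptions):

run_test() {
  local name=$1; shift
  echo "************ START TEST $name ************"
  if time "$@"; then
    echo "************ END TEST $name ************"
  else
    echo "FAILED: $name"
    return 1
  fi
}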
00:08:19.653 [2024-02-13 07:08:53.135677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108491 ] 00:08:19.653 [2024-02-13 07:08:53.291224] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.913 [2024-02-13 07:08:53.504898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.913 [2024-02-13 07:08:53.505237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.291 07:08:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:21.291 07:08:54 -- common/autotest_common.sh@850 -- # return 0 00:08:21.291 07:08:54 -- event/cpu_locks.sh@49 -- # locks_exist 108491 00:08:21.291 07:08:54 -- event/cpu_locks.sh@22 -- # lslocks -p 108491 00:08:21.291 07:08:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:21.291 07:08:54 -- event/cpu_locks.sh@50 -- # killprocess 108491 00:08:21.291 07:08:54 -- common/autotest_common.sh@924 -- # '[' -z 108491 ']' 00:08:21.291 07:08:54 -- common/autotest_common.sh@928 -- # kill -0 108491 00:08:21.291 07:08:54 -- common/autotest_common.sh@929 -- # uname 00:08:21.291 07:08:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:21.291 07:08:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 108491 00:08:21.291 07:08:54 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:21.291 killing process with pid 108491 00:08:21.291 07:08:54 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:21.291 07:08:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 108491' 00:08:21.291 07:08:54 -- common/autotest_common.sh@943 -- # kill 108491 00:08:21.291 07:08:54 -- common/autotest_common.sh@948 -- # wait 108491 00:08:23.826 07:08:57 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 108491 00:08:23.826 07:08:57 -- common/autotest_common.sh@638 -- # local es=0 00:08:23.826 07:08:57 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 108491 00:08:23.826 07:08:57 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:08:23.826 07:08:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:23.826 07:08:57 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:08:23.826 07:08:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:23.826 07:08:57 -- common/autotest_common.sh@641 -- # waitforlisten 108491 00:08:23.826 07:08:57 -- common/autotest_common.sh@817 -- # '[' -z 108491 ']' 00:08:23.826 07:08:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.826 07:08:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:23.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.826 07:08:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
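Before the negative check in progress here, default_locks confirmed that the freshly started target actually holds its core lock. locks_exist does that by listing the process's file locks and grepping for the spdk_cpu_lock prefix, exactly the lslocks/grep pair in the trace above:

locks_exist() {
  local pid=$1
  # spdk_tgt flocks /var/tmp/spdk_cpu_lock_NNN for every core it claims;
  # lslocks -p prints all locks held by that pid
  lslocks -p "$pid" | grep -q spdk_cpu_lock
}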
00:08:23.826 07:08:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:23.826 07:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.826 ERROR: process (pid: 108491) is no longer running 00:08:23.826 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (108491) - No such process 00:08:23.826 07:08:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:23.826 07:08:57 -- common/autotest_common.sh@850 -- # return 1 00:08:23.826 07:08:57 -- common/autotest_common.sh@641 -- # es=1 00:08:23.826 07:08:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:23.826 07:08:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:23.826 07:08:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:23.826 07:08:57 -- event/cpu_locks.sh@54 -- # no_locks 00:08:23.826 07:08:57 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:08:23.826 07:08:57 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:23.826 07:08:57 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:23.826 00:08:23.826 real 0m3.973s 00:08:23.826 user 0m4.051s 00:08:23.826 sys 0m0.679s 00:08:23.826 07:08:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:23.826 ************************************ 00:08:23.826 END TEST default_locks 00:08:23.826 ************************************ 00:08:23.826 07:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.826 07:08:57 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:23.826 07:08:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:23.826 07:08:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:23.826 07:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.826 ************************************ 00:08:23.826 START TEST default_locks_via_rpc 00:08:23.826 ************************************ 00:08:23.826 07:08:57 -- common/autotest_common.sh@1102 -- # default_locks_via_rpc 00:08:23.826 07:08:57 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=108566 00:08:23.826 07:08:57 -- event/cpu_locks.sh@63 -- # waitforlisten 108566 00:08:23.826 07:08:57 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:23.826 07:08:57 -- common/autotest_common.sh@817 -- # '[' -z 108566 ']' 00:08:23.826 07:08:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.826 07:08:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:23.826 07:08:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.826 07:08:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:23.826 07:08:57 -- common/autotest_common.sh@10 -- # set +x 00:08:23.827 [2024-02-13 07:08:57.176429] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
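default_locks_via_rpc, starting here, exercises the same lock through RPC instead of command-line flags: framework_disable_cpumask_locks releases the per-core lock files on a live target and framework_enable_cpumask_locks re-acquires them; both calls appear verbatim in the output that follows. A sketch of the toggle, with the pid taken from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
pid=108566                                   # the live spdk_tgt from the trace
$rpc framework_disable_cpumask_locks         # drop the core locks at runtime
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "unexpected: lock still held"
$rpc framework_enable_cpumask_locks          # take them back
lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "lock re-acquired"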
00:08:23.827 [2024-02-13 07:08:57.176817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108566 ] 00:08:23.827 [2024-02-13 07:08:57.346243] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.086 [2024-02-13 07:08:57.552556] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:24.086 [2024-02-13 07:08:57.552851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.464 07:08:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:25.464 07:08:58 -- common/autotest_common.sh@850 -- # return 0 00:08:25.464 07:08:58 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:25.464 07:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.464 07:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:25.464 07:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.464 07:08:58 -- event/cpu_locks.sh@67 -- # no_locks 00:08:25.464 07:08:58 -- event/cpu_locks.sh@26 -- # lock_files=(/var/tmp/spdk_cpu_lock*) 00:08:25.464 07:08:58 -- event/cpu_locks.sh@26 -- # local lock_files 00:08:25.464 07:08:58 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:25.464 07:08:58 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:25.464 07:08:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:25.464 07:08:58 -- common/autotest_common.sh@10 -- # set +x 00:08:25.464 07:08:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:25.464 07:08:58 -- event/cpu_locks.sh@71 -- # locks_exist 108566 00:08:25.464 07:08:58 -- event/cpu_locks.sh@22 -- # lslocks -p 108566 00:08:25.464 07:08:58 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:25.464 07:08:59 -- event/cpu_locks.sh@73 -- # killprocess 108566 00:08:25.464 07:08:59 -- common/autotest_common.sh@924 -- # '[' -z 108566 ']' 00:08:25.464 07:08:59 -- common/autotest_common.sh@928 -- # kill -0 108566 00:08:25.464 07:08:59 -- common/autotest_common.sh@929 -- # uname 00:08:25.464 07:08:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:25.464 07:08:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 108566 00:08:25.464 07:08:59 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:25.464 killing process with pid 108566 00:08:25.464 07:08:59 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:25.464 07:08:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 108566' 00:08:25.464 07:08:59 -- common/autotest_common.sh@943 -- # kill 108566 00:08:25.464 07:08:59 -- common/autotest_common.sh@948 -- # wait 108566 00:08:27.998 00:08:27.998 real 0m4.149s 00:08:27.998 user 0m4.237s 00:08:27.998 sys 0m0.701s 00:08:27.998 07:09:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:27.998 07:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:27.998 ************************************ 00:08:27.998 END TEST default_locks_via_rpc 00:08:27.998 ************************************ 00:08:27.998 07:09:01 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:27.998 07:09:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:27.998 07:09:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:27.998 07:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:27.998 
************************************ 00:08:27.998 START TEST non_locking_app_on_locked_coremask 00:08:27.998 ************************************ 00:08:27.998 07:09:01 -- common/autotest_common.sh@1102 -- # non_locking_app_on_locked_coremask 00:08:27.998 07:09:01 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108673 00:08:27.998 07:09:01 -- event/cpu_locks.sh@81 -- # waitforlisten 108673 /var/tmp/spdk.sock 00:08:27.998 07:09:01 -- common/autotest_common.sh@817 -- # '[' -z 108673 ']' 00:08:27.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.998 07:09:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.998 07:09:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:27.998 07:09:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.998 07:09:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:27.998 07:09:01 -- common/autotest_common.sh@10 -- # set +x 00:08:27.998 07:09:01 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:27.998 [2024-02-13 07:09:01.384910] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:08:27.998 [2024-02-13 07:09:01.386988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108673 ] 00:08:27.998 [2024-02-13 07:09:01.552646] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.257 [2024-02-13 07:09:01.753680] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:28.257 [2024-02-13 07:09:01.753979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.634 07:09:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:29.634 07:09:03 -- common/autotest_common.sh@850 -- # return 0 00:08:29.634 07:09:03 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108701 00:08:29.634 07:09:03 -- event/cpu_locks.sh@85 -- # waitforlisten 108701 /var/tmp/spdk2.sock 00:08:29.634 07:09:03 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:29.634 07:09:03 -- common/autotest_common.sh@817 -- # '[' -z 108701 ']' 00:08:29.634 07:09:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:29.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:29.634 07:09:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:29.635 07:09:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:29.635 07:09:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:29.635 07:09:03 -- common/autotest_common.sh@10 -- # set +x 00:08:29.635 [2024-02-13 07:09:03.117300] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:08:29.635 [2024-02-13 07:09:03.117518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108701 ] 00:08:29.635 [2024-02-13 07:09:03.274857] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
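The second spdk_tgt being launched here reuses core 0 but passes --disable-cpumask-locks, so it never competes for the flock that pid 108673 already holds; that coexistence is the whole point of non_locking_app_on_locked_coremask. Stripped to its essentials (binary path from the trace; waits and cleanup omitted):

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$bin" -m 0x1 & pid1=$!                                            # takes spdk_cpu_lock_000
"$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
# both now run on core 0; only pid1 shows up in lslocks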
00:08:29.635 [2024-02-13 07:09:03.274982] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.202 [2024-02-13 07:09:03.735986] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.202 [2024-02-13 07:09:03.736262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.106 07:09:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:32.106 07:09:05 -- common/autotest_common.sh@850 -- # return 0 00:08:32.106 07:09:05 -- event/cpu_locks.sh@87 -- # locks_exist 108673 00:08:32.106 07:09:05 -- event/cpu_locks.sh@22 -- # lslocks -p 108673 00:08:32.106 07:09:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:32.364 07:09:05 -- event/cpu_locks.sh@89 -- # killprocess 108673 00:08:32.364 07:09:05 -- common/autotest_common.sh@924 -- # '[' -z 108673 ']' 00:08:32.364 07:09:05 -- common/autotest_common.sh@928 -- # kill -0 108673 00:08:32.364 07:09:05 -- common/autotest_common.sh@929 -- # uname 00:08:32.364 07:09:05 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:32.364 07:09:05 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 108673 00:08:32.364 07:09:05 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:32.364 killing process with pid 108673 00:08:32.364 07:09:05 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:32.364 07:09:05 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 108673' 00:08:32.364 07:09:05 -- common/autotest_common.sh@943 -- # kill 108673 00:08:32.364 07:09:05 -- common/autotest_common.sh@948 -- # wait 108673 00:08:37.649 07:09:10 -- event/cpu_locks.sh@90 -- # killprocess 108701 00:08:37.649 07:09:10 -- common/autotest_common.sh@924 -- # '[' -z 108701 ']' 00:08:37.649 07:09:10 -- common/autotest_common.sh@928 -- # kill -0 108701 00:08:37.649 07:09:10 -- common/autotest_common.sh@929 -- # uname 00:08:37.649 07:09:10 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:37.649 07:09:10 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 108701 00:08:37.649 07:09:10 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:37.649 07:09:10 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:37.649 07:09:10 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 108701' 00:08:37.649 killing process with pid 108701 00:08:37.649 07:09:10 -- common/autotest_common.sh@943 -- # kill 108701 00:08:37.649 07:09:10 -- common/autotest_common.sh@948 -- # wait 108701 00:08:39.026 00:08:39.026 real 0m11.249s 00:08:39.026 user 0m11.826s 00:08:39.026 sys 0m1.453s 00:08:39.026 07:09:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:39.026 07:09:12 -- common/autotest_common.sh@10 -- # set +x 00:08:39.026 ************************************ 00:08:39.026 END TEST non_locking_app_on_locked_coremask 00:08:39.026 ************************************ 00:08:39.026 07:09:12 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:39.026 07:09:12 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:39.026 07:09:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:39.026 07:09:12 -- common/autotest_common.sh@10 -- # set +x 00:08:39.026 ************************************ 00:08:39.026 START TEST locking_app_on_unlocked_coremask 00:08:39.026 ************************************ 00:08:39.026 07:09:12 -- common/autotest_common.sh@1102 -- # locking_app_on_unlocked_coremask 00:08:39.026 
07:09:12 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=108876 00:08:39.026 07:09:12 -- event/cpu_locks.sh@99 -- # waitforlisten 108876 /var/tmp/spdk.sock 00:08:39.026 07:09:12 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:39.026 07:09:12 -- common/autotest_common.sh@817 -- # '[' -z 108876 ']' 00:08:39.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.026 07:09:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.026 07:09:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:39.026 07:09:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.026 07:09:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:39.026 07:09:12 -- common/autotest_common.sh@10 -- # set +x 00:08:39.026 [2024-02-13 07:09:12.670007] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:08:39.026 [2024-02-13 07:09:12.670202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108876 ] 00:08:39.285 [2024-02-13 07:09:12.831904] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:39.285 [2024-02-13 07:09:12.832075] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.544 [2024-02-13 07:09:13.050246] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:39.544 [2024-02-13 07:09:13.050553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.964 07:09:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:40.964 07:09:14 -- common/autotest_common.sh@850 -- # return 0 00:08:40.964 07:09:14 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=108900 00:08:40.964 07:09:14 -- event/cpu_locks.sh@103 -- # waitforlisten 108900 /var/tmp/spdk2.sock 00:08:40.964 07:09:14 -- common/autotest_common.sh@817 -- # '[' -z 108900 ']' 00:08:40.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:40.964 07:09:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:40.964 07:09:14 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:40.964 07:09:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:40.964 07:09:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:40.964 07:09:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:40.964 07:09:14 -- common/autotest_common.sh@10 -- # set +x 00:08:40.964 [2024-02-13 07:09:14.480835] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
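locking_app_on_unlocked_coremask inverts the arrangement: the instance starting here opts out with --disable-cpumask-locks (hence the 'CPU core locks deactivated' notice in the output below), leaving the second, unmodified instance free to claim core 0 for itself. The shape of the scenario, with waits omitted:

bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$bin" -m 0x1 --disable-cpumask-locks & pid1=$!   # leaves core 0 unlocked
"$bin" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!    # acquires spdk_cpu_lock_000 itself
lslocks -p "$pid2" | grep -q spdk_cpu_lock        # the lock belongs to the second app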
00:08:40.964 [2024-02-13 07:09:14.481025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108900 ] 00:08:41.260 [2024-02-13 07:09:14.659829] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.518 [2024-02-13 07:09:15.146216] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:41.518 [2024-02-13 07:09:15.146493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.422 07:09:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:43.422 07:09:16 -- common/autotest_common.sh@850 -- # return 0 00:08:43.422 07:09:16 -- event/cpu_locks.sh@105 -- # locks_exist 108900 00:08:43.422 07:09:16 -- event/cpu_locks.sh@22 -- # lslocks -p 108900 00:08:43.422 07:09:16 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:43.681 07:09:17 -- event/cpu_locks.sh@107 -- # killprocess 108876 00:08:43.681 07:09:17 -- common/autotest_common.sh@924 -- # '[' -z 108876 ']' 00:08:43.681 07:09:17 -- common/autotest_common.sh@928 -- # kill -0 108876 00:08:43.681 07:09:17 -- common/autotest_common.sh@929 -- # uname 00:08:43.681 07:09:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:43.681 07:09:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 108876 00:08:43.681 07:09:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:43.681 07:09:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:43.681 killing process with pid 108876 00:08:43.681 07:09:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 108876' 00:08:43.681 07:09:17 -- common/autotest_common.sh@943 -- # kill 108876 00:08:43.681 07:09:17 -- common/autotest_common.sh@948 -- # wait 108876 00:08:48.951 07:09:21 -- event/cpu_locks.sh@108 -- # killprocess 108900 00:08:48.951 07:09:21 -- common/autotest_common.sh@924 -- # '[' -z 108900 ']' 00:08:48.951 07:09:21 -- common/autotest_common.sh@928 -- # kill -0 108900 00:08:48.951 07:09:21 -- common/autotest_common.sh@929 -- # uname 00:08:48.951 07:09:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:48.951 07:09:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 108900 00:08:48.951 07:09:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:48.951 07:09:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:48.951 07:09:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 108900' 00:08:48.951 killing process with pid 108900 00:08:48.951 07:09:21 -- common/autotest_common.sh@943 -- # kill 108900 00:08:48.951 07:09:21 -- common/autotest_common.sh@948 -- # wait 108900 00:08:50.349 00:08:50.349 real 0m11.431s 00:08:50.349 user 0m12.243s 00:08:50.349 sys 0m1.438s 00:08:50.349 07:09:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:50.349 07:09:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.349 ************************************ 00:08:50.349 END TEST locking_app_on_unlocked_coremask 00:08:50.349 ************************************ 00:08:50.608 07:09:24 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:50.608 07:09:24 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:50.608 07:09:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:50.608 07:09:24 -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.608 ************************************ 00:08:50.608 START TEST locking_app_on_locked_coremask 00:08:50.608 ************************************ 00:08:50.608 07:09:24 -- common/autotest_common.sh@1102 -- # locking_app_on_locked_coremask 00:08:50.608 07:09:24 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=109075 00:08:50.608 07:09:24 -- event/cpu_locks.sh@116 -- # waitforlisten 109075 /var/tmp/spdk.sock 00:08:50.608 07:09:24 -- common/autotest_common.sh@817 -- # '[' -z 109075 ']' 00:08:50.608 07:09:24 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:50.608 07:09:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.608 07:09:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:50.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.608 07:09:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.608 07:09:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:50.608 07:09:24 -- common/autotest_common.sh@10 -- # set +x 00:08:50.608 [2024-02-13 07:09:24.171480] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:08:50.608 [2024-02-13 07:09:24.171880] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109075 ] 00:08:50.867 [2024-02-13 07:09:24.338636] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.125 [2024-02-13 07:09:24.573233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:51.125 [2024-02-13 07:09:24.573495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.502 07:09:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:52.502 07:09:25 -- common/autotest_common.sh@850 -- # return 0 00:08:52.502 07:09:25 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=109110 00:08:52.502 07:09:25 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 109110 /var/tmp/spdk2.sock 00:08:52.502 07:09:25 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:52.502 07:09:25 -- common/autotest_common.sh@638 -- # local es=0 00:08:52.502 07:09:25 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 109110 /var/tmp/spdk2.sock 00:08:52.502 07:09:25 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:08:52.502 07:09:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:52.502 07:09:25 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:08:52.502 07:09:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:52.502 07:09:25 -- common/autotest_common.sh@641 -- # waitforlisten 109110 /var/tmp/spdk2.sock 00:08:52.502 07:09:25 -- common/autotest_common.sh@817 -- # '[' -z 109110 ']' 00:08:52.502 07:09:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:52.502 07:09:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:52.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
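Here waitforlisten runs under NOT: pid 109075 already holds the core-0 lock, so pid 109110 is expected to abort with a claim error rather than come up, and the test passes only if that happens. A minimal sketch of the inversion helper (the suite's real version also validates its argument, which is what the type -t/valid_exec_arg lines above are doing):

NOT() {
  # succeed only when the wrapped command fails
  if "$@"; then
    return 1
  fi
  return 0
}

NOT waitforlisten 109110 /var/tmp/spdk2.sock && echo "second instance failed to start, as required"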
00:08:52.502 07:09:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:52.502 07:09:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:52.502 07:09:25 -- common/autotest_common.sh@10 -- # set +x 00:08:52.502 [2024-02-13 07:09:25.948708] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:08:52.502 [2024-02-13 07:09:25.948926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109110 ] 00:08:52.502 [2024-02-13 07:09:26.133243] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 109075 has claimed it. 00:08:52.502 [2024-02-13 07:09:26.133360] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:53.067 ERROR: process (pid: 109110) is no longer running 00:08:53.067 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (109110) - No such process 00:08:53.067 07:09:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:53.067 07:09:26 -- common/autotest_common.sh@850 -- # return 1 00:08:53.067 07:09:26 -- common/autotest_common.sh@641 -- # es=1 00:08:53.067 07:09:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:53.067 07:09:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:53.067 07:09:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:53.067 07:09:26 -- event/cpu_locks.sh@122 -- # locks_exist 109075 00:08:53.067 07:09:26 -- event/cpu_locks.sh@22 -- # lslocks -p 109075 00:08:53.067 07:09:26 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:53.326 07:09:26 -- event/cpu_locks.sh@124 -- # killprocess 109075 00:08:53.326 07:09:26 -- common/autotest_common.sh@924 -- # '[' -z 109075 ']' 00:08:53.326 07:09:26 -- common/autotest_common.sh@928 -- # kill -0 109075 00:08:53.326 07:09:26 -- common/autotest_common.sh@929 -- # uname 00:08:53.326 07:09:26 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:53.326 07:09:26 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 109075 00:08:53.326 07:09:26 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:53.326 07:09:26 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:53.326 07:09:26 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 109075' 00:08:53.326 killing process with pid 109075 00:08:53.326 07:09:26 -- common/autotest_common.sh@943 -- # kill 109075 00:08:53.326 07:09:26 -- common/autotest_common.sh@948 -- # wait 109075 00:08:55.858 00:08:55.858 real 0m5.286s 00:08:55.858 user 0m5.655s 00:08:55.858 sys 0m0.967s 00:08:55.858 07:09:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:55.858 ************************************ 00:08:55.858 END TEST locking_app_on_locked_coremask 00:08:55.858 ************************************ 00:08:55.858 07:09:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.858 07:09:29 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:55.858 07:09:29 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:08:55.858 07:09:29 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:08:55.858 07:09:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.858 ************************************ 00:08:55.858 START TEST 
locking_overlapped_coremask 00:08:55.858 ************************************ 00:08:55.858 07:09:29 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask 00:08:55.858 07:09:29 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109179 00:08:55.858 07:09:29 -- event/cpu_locks.sh@133 -- # waitforlisten 109179 /var/tmp/spdk.sock 00:08:55.858 07:09:29 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:08:55.858 07:09:29 -- common/autotest_common.sh@817 -- # '[' -z 109179 ']' 00:08:55.858 07:09:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.858 07:09:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:55.858 07:09:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.858 07:09:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:55.858 07:09:29 -- common/autotest_common.sh@10 -- # set +x 00:08:55.858 [2024-02-13 07:09:29.512770] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:08:55.858 [2024-02-13 07:09:29.512941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109179 ] 00:08:56.152 [2024-02-13 07:09:29.696469] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.417 [2024-02-13 07:09:29.998124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:56.417 [2024-02-13 07:09:29.998551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.417 [2024-02-13 07:09:29.998677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.417 [2024-02-13 07:09:29.998717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.794 07:09:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:57.794 07:09:31 -- common/autotest_common.sh@850 -- # return 0 00:08:57.794 07:09:31 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=109235 00:08:57.794 07:09:31 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:57.794 07:09:31 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 109235 /var/tmp/spdk2.sock 00:08:57.794 07:09:31 -- common/autotest_common.sh@638 -- # local es=0 00:08:57.794 07:09:31 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 109235 /var/tmp/spdk2.sock 00:08:57.794 07:09:31 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:08:57.794 07:09:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:57.794 07:09:31 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:08:57.794 07:09:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:57.794 07:09:31 -- common/autotest_common.sh@641 -- # waitforlisten 109235 /var/tmp/spdk2.sock 00:08:57.794 07:09:31 -- common/autotest_common.sh@817 -- # '[' -z 109235 ']' 00:08:57.794 07:09:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:57.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:57.794 07:09:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:57.794 07:09:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:57.794 07:09:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:57.794 07:09:31 -- common/autotest_common.sh@10 -- # set +x 00:08:57.794 [2024-02-13 07:09:31.310349] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:08:57.795 [2024-02-13 07:09:31.310577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109235 ] 00:08:58.053 [2024-02-13 07:09:31.518071] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109179 has claimed it. 00:08:58.053 [2024-02-13 07:09:31.518166] app.c: 789:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:58.312 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (109235) - No such process 00:08:58.312 ERROR: process (pid: 109235) is no longer running 00:08:58.312 07:09:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:58.312 07:09:31 -- common/autotest_common.sh@850 -- # return 1 00:08:58.312 07:09:31 -- common/autotest_common.sh@641 -- # es=1 00:08:58.312 07:09:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:58.312 07:09:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:58.312 07:09:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:58.312 07:09:31 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:58.312 07:09:31 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:58.312 07:09:31 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:58.312 07:09:31 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:58.312 07:09:31 -- event/cpu_locks.sh@141 -- # killprocess 109179 00:08:58.312 07:09:31 -- common/autotest_common.sh@924 -- # '[' -z 109179 ']' 00:08:58.312 07:09:31 -- common/autotest_common.sh@928 -- # kill -0 109179 00:08:58.312 07:09:31 -- common/autotest_common.sh@929 -- # uname 00:08:58.312 07:09:31 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:08:58.312 07:09:31 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 109179 00:08:58.570 07:09:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:08:58.570 07:09:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:08:58.570 07:09:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 109179' 00:08:58.570 killing process with pid 109179 00:08:58.570 07:09:32 -- common/autotest_common.sh@943 -- # kill 109179 00:08:58.570 07:09:32 -- common/autotest_common.sh@948 -- # wait 109179 00:09:01.104 00:09:01.104 real 0m5.051s 00:09:01.104 user 0m13.365s 00:09:01.104 sys 0m0.845s 00:09:01.104 ************************************ 00:09:01.104 END TEST locking_overlapped_coremask 00:09:01.104 ************************************ 00:09:01.104 07:09:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:01.104 07:09:34 -- common/autotest_common.sh@10 -- # set +x 
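The check_remaining_locks step traced above is how these tests confirm the core-lock bookkeeping: a target started with -m 0x7 should leave exactly /var/tmp/spdk_cpu_lock_000 through _002 behind while it runs. A minimal bash sketch of that comparison, mirroring the trace (paths as shown in this log):

    # sketch: assert that exactly cores 0-2 hold SPDK CPU lock files,
    # as a spdk_tgt started with -m 0x7 should guarantee
    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files that exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # what mask 0x7 implies
    if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
        echo "lock files match core mask 0x7"
    else
        echo "unexpected lock files: ${locks[*]}" >&2
        exit 1
    fi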
00:09:01.104 07:09:34 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:01.104 07:09:34 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:01.104 07:09:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:01.104 07:09:34 -- common/autotest_common.sh@10 -- # set +x 00:09:01.104 ************************************ 00:09:01.104 START TEST locking_overlapped_coremask_via_rpc 00:09:01.104 ************************************ 00:09:01.105 07:09:34 -- common/autotest_common.sh@1102 -- # locking_overlapped_coremask_via_rpc 00:09:01.105 07:09:34 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=109304 00:09:01.105 07:09:34 -- event/cpu_locks.sh@149 -- # waitforlisten 109304 /var/tmp/spdk.sock 00:09:01.105 07:09:34 -- common/autotest_common.sh@817 -- # '[' -z 109304 ']' 00:09:01.105 07:09:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.105 07:09:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:01.105 07:09:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.105 07:09:34 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:01.105 07:09:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:01.105 07:09:34 -- common/autotest_common.sh@10 -- # set +x 00:09:01.105 [2024-02-13 07:09:34.624474] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:01.105 [2024-02-13 07:09:34.624673] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109304 ] 00:09:01.363 [2024-02-13 07:09:34.801036] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:01.363 [2024-02-13 07:09:34.801115] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:01.363 [2024-02-13 07:09:35.045118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:01.363 [2024-02-13 07:09:35.045495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.363 [2024-02-13 07:09:35.045727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.363 [2024-02-13 07:09:35.045729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:02.740 07:09:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:02.740 07:09:36 -- common/autotest_common.sh@850 -- # return 0 00:09:02.740 07:09:36 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=109341 00:09:02.740 07:09:36 -- event/cpu_locks.sh@153 -- # waitforlisten 109341 /var/tmp/spdk2.sock 00:09:02.740 07:09:36 -- common/autotest_common.sh@817 -- # '[' -z 109341 ']' 00:09:02.740 07:09:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:02.740 07:09:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:02.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:02.740 07:09:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:02.740 07:09:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:02.740 07:09:36 -- common/autotest_common.sh@10 -- # set +x 00:09:02.740 07:09:36 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:02.740 [2024-02-13 07:09:36.365026] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:02.740 [2024-02-13 07:09:36.365571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109341 ] 00:09:02.999 [2024-02-13 07:09:36.574656] app.c: 793:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:09:02.999 [2024-02-13 07:09:36.574734] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:03.566 [2024-02-13 07:09:37.056959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:03.566 [2024-02-13 07:09:37.057365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.566 [2024-02-13 07:09:37.073192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:03.566 [2024-02-13 07:09:37.073194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.469 07:09:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:05.469 07:09:38 -- common/autotest_common.sh@850 -- # return 0 00:09:05.469 07:09:38 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:05.469 07:09:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.469 07:09:38 -- common/autotest_common.sh@10 -- # set +x 00:09:05.469 07:09:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:05.469 07:09:38 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:05.469 07:09:38 -- common/autotest_common.sh@638 -- # local es=0 00:09:05.469 07:09:38 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:05.469 07:09:38 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:09:05.469 07:09:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.469 07:09:38 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:09:05.469 07:09:38 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:05.469 07:09:38 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:05.469 07:09:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:05.469 07:09:38 -- common/autotest_common.sh@10 -- # set +x 00:09:05.469 [2024-02-13 07:09:38.777319] app.c: 663:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109304 has claimed it. 
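The JSON-RPC exchange dumped next is the point of this test: both targets were started with --disable-cpumask-locks, the first (pid 109304, mask 0x7) then claimed cores 0-2 through framework_enable_cpumask_locks, so the same RPC against the second target (mask 0x1c, cores 2-4) collides on core 2 and returns -32603. A sketch of issuing that RPC by hand, assuming rpc_cmd here wraps SPDK's stock scripts/rpc.py client:

    # sketch: ask the second target to claim its core mask after startup;
    # expected to fail here because core 2 is already locked by pid 109304
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks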
00:09:05.469 request: 00:09:05.469 { 00:09:05.469 "method": "framework_enable_cpumask_locks", 00:09:05.469 "req_id": 1 00:09:05.469 } 00:09:05.469 Got JSON-RPC error response 00:09:05.469 response: 00:09:05.469 { 00:09:05.469 "code": -32603, 00:09:05.469 "message": "Failed to claim CPU core: 2" 00:09:05.469 } 00:09:05.469 07:09:38 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:09:05.469 07:09:38 -- common/autotest_common.sh@641 -- # es=1 00:09:05.469 07:09:38 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:05.469 07:09:38 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:05.469 07:09:38 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:05.469 07:09:38 -- event/cpu_locks.sh@158 -- # waitforlisten 109304 /var/tmp/spdk.sock 00:09:05.469 07:09:38 -- common/autotest_common.sh@817 -- # '[' -z 109304 ']' 00:09:05.469 07:09:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.469 07:09:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:05.469 07:09:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.469 07:09:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:05.469 07:09:38 -- common/autotest_common.sh@10 -- # set +x 00:09:05.469 07:09:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:05.469 07:09:39 -- common/autotest_common.sh@850 -- # return 0 00:09:05.469 07:09:39 -- event/cpu_locks.sh@159 -- # waitforlisten 109341 /var/tmp/spdk2.sock 00:09:05.469 07:09:39 -- common/autotest_common.sh@817 -- # '[' -z 109341 ']' 00:09:05.469 07:09:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:05.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:05.469 07:09:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:05.469 07:09:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:09:05.469 07:09:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:05.469 07:09:39 -- common/autotest_common.sh@10 -- # set +x 00:09:05.727 07:09:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:05.727 07:09:39 -- common/autotest_common.sh@850 -- # return 0 00:09:05.727 07:09:39 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:05.727 07:09:39 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:05.727 07:09:39 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:05.727 07:09:39 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:05.727 00:09:05.727 real 0m4.749s 00:09:05.727 user 0m1.945s 00:09:05.727 sys 0m0.247s 00:09:05.727 07:09:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:05.727 07:09:39 -- common/autotest_common.sh@10 -- # set +x 00:09:05.727 ************************************ 00:09:05.727 END TEST locking_overlapped_coremask_via_rpc 00:09:05.727 ************************************ 00:09:05.727 07:09:39 -- event/cpu_locks.sh@174 -- # cleanup 00:09:05.727 07:09:39 -- event/cpu_locks.sh@15 -- # [[ -z 109304 ]] 00:09:05.727 07:09:39 -- event/cpu_locks.sh@15 -- # killprocess 109304 00:09:05.727 07:09:39 -- common/autotest_common.sh@924 -- # '[' -z 109304 ']' 00:09:05.727 07:09:39 -- common/autotest_common.sh@928 -- # kill -0 109304 00:09:05.727 07:09:39 -- common/autotest_common.sh@929 -- # uname 00:09:05.727 07:09:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:09:05.727 07:09:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 109304 00:09:05.727 07:09:39 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:09:05.727 killing process with pid 109304 00:09:05.727 07:09:39 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:09:05.727 07:09:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 109304' 00:09:05.727 07:09:39 -- common/autotest_common.sh@943 -- # kill 109304 00:09:05.727 07:09:39 -- common/autotest_common.sh@948 -- # wait 109304 00:09:08.255 07:09:41 -- event/cpu_locks.sh@16 -- # [[ -z 109341 ]] 00:09:08.255 07:09:41 -- event/cpu_locks.sh@16 -- # killprocess 109341 00:09:08.255 07:09:41 -- common/autotest_common.sh@924 -- # '[' -z 109341 ']' 00:09:08.255 07:09:41 -- common/autotest_common.sh@928 -- # kill -0 109341 00:09:08.255 07:09:41 -- common/autotest_common.sh@929 -- # uname 00:09:08.255 07:09:41 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:09:08.255 07:09:41 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 109341 00:09:08.255 07:09:41 -- common/autotest_common.sh@930 -- # process_name=reactor_2 00:09:08.255 killing process with pid 109341 00:09:08.255 07:09:41 -- common/autotest_common.sh@934 -- # '[' reactor_2 = sudo ']' 00:09:08.255 07:09:41 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 109341' 00:09:08.255 07:09:41 -- common/autotest_common.sh@943 -- # kill 109341 00:09:08.255 07:09:41 -- common/autotest_common.sh@948 -- # wait 109341 00:09:10.785 07:09:43 -- event/cpu_locks.sh@18 -- # rm -f 00:09:10.785 07:09:43 -- event/cpu_locks.sh@1 -- # cleanup 00:09:10.785 07:09:43 -- event/cpu_locks.sh@15 -- # [[ -z 109304 ]] 00:09:10.785 07:09:43 -- event/cpu_locks.sh@15 -- # killprocess 109304 00:09:10.785 
07:09:43 -- common/autotest_common.sh@924 -- # '[' -z 109304 ']' 00:09:10.785 07:09:43 -- common/autotest_common.sh@928 -- # kill -0 109304 00:09:10.785 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (109304) - No such process 00:09:10.785 07:09:43 -- common/autotest_common.sh@951 -- # echo 'Process with pid 109304 is not found' 00:09:10.785 Process with pid 109304 is not found 00:09:10.785 07:09:43 -- event/cpu_locks.sh@16 -- # [[ -z 109341 ]] 00:09:10.785 07:09:43 -- event/cpu_locks.sh@16 -- # killprocess 109341 00:09:10.785 07:09:43 -- common/autotest_common.sh@924 -- # '[' -z 109341 ']' 00:09:10.785 Process with pid 109341 is not found 00:09:10.785 07:09:43 -- common/autotest_common.sh@928 -- # kill -0 109341 00:09:10.785 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 928: kill: (109341) - No such process 00:09:10.785 07:09:43 -- common/autotest_common.sh@951 -- # echo 'Process with pid 109341 is not found' 00:09:10.785 07:09:43 -- event/cpu_locks.sh@18 -- # rm -f 00:09:10.785 ************************************ 00:09:10.785 END TEST cpu_locks 00:09:10.785 ************************************ 00:09:10.785 00:09:10.785 real 0m50.898s 00:09:10.785 user 1m27.557s 00:09:10.785 sys 0m7.640s 00:09:10.785 07:09:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.785 07:09:43 -- common/autotest_common.sh@10 -- # set +x 00:09:10.785 00:09:10.785 real 1m24.209s 00:09:10.785 user 2m32.090s 00:09:10.785 sys 0m11.748s 00:09:10.785 07:09:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:10.785 07:09:43 -- common/autotest_common.sh@10 -- # set +x 00:09:10.785 ************************************ 00:09:10.785 END TEST event 00:09:10.785 ************************************ 00:09:10.785 07:09:43 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:10.785 07:09:43 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:10.785 07:09:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:10.785 07:09:43 -- common/autotest_common.sh@10 -- # set +x 00:09:10.785 ************************************ 00:09:10.785 START TEST thread 00:09:10.785 ************************************ 00:09:10.785 07:09:43 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:10.785 * Looking for test storage... 00:09:10.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:10.785 07:09:44 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:10.785 07:09:44 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:09:10.785 07:09:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:10.785 07:09:44 -- common/autotest_common.sh@10 -- # set +x 00:09:10.785 ************************************ 00:09:10.785 START TEST thread_poller_perf 00:09:10.785 ************************************ 00:09:10.785 07:09:44 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:10.785 [2024-02-13 07:09:44.101362] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:09:10.785 [2024-02-13 07:09:44.102518] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109554 ] 00:09:10.785 [2024-02-13 07:09:44.290134] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.043 [2024-02-13 07:09:44.564666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.043 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:09:12.458 ====================================== 00:09:12.458 busy:2213857738 (cyc) 00:09:12.458 total_run_count: 300000 00:09:12.458 tsc_hz: 2200000000 (cyc) 00:09:12.458 ====================================== 00:09:12.458 poller_cost: 7379 (cyc), 3354 (nsec) 00:09:12.458 ************************************ 00:09:12.458 END TEST thread_poller_perf 00:09:12.458 ************************************ 00:09:12.458 00:09:12.458 real 0m1.907s 00:09:12.458 user 0m1.681s 00:09:12.458 sys 0m0.124s 00:09:12.458 07:09:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:12.458 07:09:45 -- common/autotest_common.sh@10 -- # set +x 00:09:12.458 07:09:45 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:12.458 07:09:45 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:09:12.458 07:09:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:12.458 07:09:45 -- common/autotest_common.sh@10 -- # set +x 00:09:12.458 ************************************ 00:09:12.458 START TEST thread_poller_perf 00:09:12.458 ************************************ 00:09:12.458 07:09:46 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:12.458 [2024-02-13 07:09:46.057427] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:12.458 [2024-02-13 07:09:46.057796] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109597 ] 00:09:12.716 [2024-02-13 07:09:46.227796] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.975 [2024-02-13 07:09:46.441109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.975 Running 1000 pollers for 1 seconds with 0 microseconds period. 
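For reference while reading these result banners: poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. A worked sketch in bash arithmetic, using the 1-microsecond-period run's figures above:

    # sketch: reproduce the poller_cost line from the banner above
    busy=2213857738 runs=300000 tsc_hz=2200000000
    echo $(( busy / runs ))                          # 7379 cycles per poller call
    echo $(( busy / runs * 1000000000 / tsc_hz ))    # 3354 nsec at 2.2 GHz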
00:09:14.353 ====================================== 00:09:14.353 busy:2204965114 (cyc) 00:09:14.353 total_run_count: 3966000 00:09:14.353 tsc_hz: 2200000000 (cyc) 00:09:14.353 ====================================== 00:09:14.353 poller_cost: 555 (cyc), 252 (nsec) 00:09:14.353 00:09:14.353 real 0m1.805s 00:09:14.353 user 0m1.591s 00:09:14.353 sys 0m0.113s 00:09:14.353 07:09:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:14.353 07:09:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.353 ************************************ 00:09:14.353 END TEST thread_poller_perf 00:09:14.353 ************************************ 00:09:14.353 07:09:47 -- thread/thread.sh@17 -- # [[ n != \y ]] 00:09:14.353 07:09:47 -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:14.353 07:09:47 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:14.353 07:09:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:14.353 07:09:47 -- common/autotest_common.sh@10 -- # set +x 00:09:14.353 ************************************ 00:09:14.353 START TEST thread_spdk_lock 00:09:14.353 ************************************ 00:09:14.353 07:09:47 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:09:14.353 [2024-02-13 07:09:47.916934] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:14.353 [2024-02-13 07:09:47.917510] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109646 ] 00:09:14.612 [2024-02-13 07:09:48.093010] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:14.871 [2024-02-13 07:09:48.312264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.871 [2024-02-13 07:09:48.312260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.439 [2024-02-13 07:09:48.863487] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 955:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:15.439 [2024-02-13 07:09:48.863640] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3062:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:09:15.439 [2024-02-13 07:09:48.863678] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3017:sspin_stacks_print: *ERROR*: spinlock 0x5611096630c0 00:09:15.439 [2024-02-13 07:09:48.871761] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:15.439 [2024-02-13 07:09:48.871865] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1016:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:15.439 [2024-02-13 07:09:48.871903] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 850:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:09:15.697 Starting test contend 00:09:15.697 Worker Delay Wait us Hold us Total us 00:09:15.697 0 3 134412 208582 342994 00:09:15.697 1 5 64022 309740 373762 00:09:15.697 PASS test contend 00:09:15.697 Starting test hold_by_poller 
00:09:15.697 PASS test hold_by_poller 00:09:15.697 Starting test hold_by_message 00:09:15.697 PASS test hold_by_message 00:09:15.697 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:09:15.697 100014 assertions passed 00:09:15.697 0 assertions failed 00:09:15.697 00:09:15.697 real 0m1.361s 00:09:15.697 user 0m1.712s 00:09:15.697 sys 0m0.109s 00:09:15.697 07:09:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.697 ************************************ 00:09:15.697 END TEST thread_spdk_lock 00:09:15.697 ************************************ 00:09:15.697 07:09:49 -- common/autotest_common.sh@10 -- # set +x 00:09:15.697 ************************************ 00:09:15.697 END TEST thread 00:09:15.697 ************************************ 00:09:15.697 00:09:15.697 real 0m5.307s 00:09:15.697 user 0m5.122s 00:09:15.697 sys 0m0.430s 00:09:15.697 07:09:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.697 07:09:49 -- common/autotest_common.sh@10 -- # set +x 00:09:15.697 07:09:49 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:15.697 07:09:49 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:09:15.697 07:09:49 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:15.697 07:09:49 -- common/autotest_common.sh@10 -- # set +x 00:09:15.697 ************************************ 00:09:15.697 START TEST accel 00:09:15.697 ************************************ 00:09:15.697 07:09:49 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:09:15.955 * Looking for test storage... 00:09:15.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:09:15.955 07:09:49 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:09:15.955 07:09:49 -- accel/accel.sh@74 -- # get_expected_opcs 00:09:15.955 07:09:49 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:15.955 07:09:49 -- accel/accel.sh@59 -- # spdk_tgt_pid=109730 00:09:15.955 07:09:49 -- accel/accel.sh@60 -- # waitforlisten 109730 00:09:15.955 07:09:49 -- common/autotest_common.sh@817 -- # '[' -z 109730 ']' 00:09:15.955 07:09:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.955 07:09:49 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:09:15.955 07:09:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:15.955 07:09:49 -- accel/accel.sh@58 -- # build_accel_config 00:09:15.955 07:09:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.955 07:09:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:15.955 07:09:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:15.955 07:09:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:15.955 07:09:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:15.955 07:09:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:15.955 07:09:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:15.955 07:09:49 -- accel/accel.sh@41 -- # local IFS=, 00:09:15.955 07:09:49 -- accel/accel.sh@42 -- # jq -r . 00:09:15.955 07:09:49 -- common/autotest_common.sh@10 -- # set +x 00:09:15.955 [2024-02-13 07:09:49.497241] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:09:15.955 [2024-02-13 07:09:49.498448] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109730 ] 00:09:16.214 [2024-02-13 07:09:49.664785] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.214 [2024-02-13 07:09:49.858103] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:16.214 [2024-02-13 07:09:49.858402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.214 [2024-02-13 07:09:49.858473] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:17.587 07:09:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:17.587 07:09:51 -- common/autotest_common.sh@850 -- # return 0 00:09:17.587 07:09:51 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:09:17.587 07:09:51 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:09:17.587 07:09:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:17.587 07:09:51 -- common/autotest_common.sh@10 -- # set +x 00:09:17.587 07:09:51 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:09:17.587 07:09:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:17.846 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.846 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.846 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.846 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.846 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.846 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.846 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.846 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.846 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.846 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.846 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.846 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.846 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.846 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.847 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.847 
07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.847 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.847 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.847 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.847 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.847 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.847 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.847 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.847 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.847 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.847 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.847 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.847 07:09:51 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # IFS== 00:09:17.847 07:09:51 -- accel/accel.sh@64 -- # read -r opc module 00:09:17.847 07:09:51 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:09:17.847 07:09:51 -- accel/accel.sh@67 -- # killprocess 109730 00:09:17.847 07:09:51 -- common/autotest_common.sh@924 -- # '[' -z 109730 ']' 00:09:17.847 07:09:51 -- common/autotest_common.sh@928 -- # kill -0 109730 00:09:17.847 07:09:51 -- common/autotest_common.sh@929 -- # uname 00:09:17.847 07:09:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:09:17.847 07:09:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 109730 00:09:17.847 killing process with pid 109730 00:09:17.847 07:09:51 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:09:17.847 07:09:51 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:09:17.847 07:09:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 109730' 00:09:17.847 07:09:51 -- common/autotest_common.sh@943 -- # kill 109730 00:09:17.847 07:09:51 -- common/autotest_common.sh@948 -- # wait 109730 00:09:17.847 [2024-02-13 07:09:51.327776] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:19.749 07:09:53 -- accel/accel.sh@68 -- # trap - ERR 00:09:19.749 07:09:53 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:09:19.749 07:09:53 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:09:19.749 07:09:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:19.750 07:09:53 -- common/autotest_common.sh@10 -- # set +x 00:09:19.750 07:09:53 -- common/autotest_common.sh@1102 -- # accel_perf -h 00:09:19.750 07:09:53 
-- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:09:19.750 07:09:53 -- accel/accel.sh@12 -- # build_accel_config 00:09:19.750 07:09:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:19.750 07:09:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:19.750 07:09:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:19.750 07:09:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:19.750 07:09:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:19.750 07:09:53 -- accel/accel.sh@41 -- # local IFS=, 00:09:19.750 07:09:53 -- accel/accel.sh@42 -- # jq -r . 00:09:19.750 07:09:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:19.750 07:09:53 -- common/autotest_common.sh@10 -- # set +x 00:09:20.009 07:09:53 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:09:20.009 07:09:53 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:09:20.009 07:09:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:20.009 07:09:53 -- common/autotest_common.sh@10 -- # set +x 00:09:20.009 ************************************ 00:09:20.009 START TEST accel_missing_filename 00:09:20.009 ************************************ 00:09:20.009 07:09:53 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress 00:09:20.009 07:09:53 -- common/autotest_common.sh@638 -- # local es=0 00:09:20.009 07:09:53 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:09:20.009 07:09:53 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:09:20.009 07:09:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:20.009 07:09:53 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:09:20.009 07:09:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:20.009 07:09:53 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:09:20.009 07:09:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:09:20.009 07:09:53 -- accel/accel.sh@12 -- # build_accel_config 00:09:20.009 07:09:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:20.009 07:09:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:20.009 07:09:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:20.009 07:09:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:20.009 07:09:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:20.009 07:09:53 -- accel/accel.sh@41 -- # local IFS=, 00:09:20.009 07:09:53 -- accel/accel.sh@42 -- # jq -r . 00:09:20.009 [2024-02-13 07:09:53.522472] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:09:20.009 [2024-02-13 07:09:53.522755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109847 ] 00:09:20.009 [2024-02-13 07:09:53.698878] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.268 [2024-02-13 07:09:53.900274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.268 [2024-02-13 07:09:53.900462] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:20.527 [2024-02-13 07:09:54.081663] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:20.527 [2024-02-13 07:09:54.081799] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:21.094 [2024-02-13 07:09:54.567575] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:21.353 A filename is required. 00:09:21.353 07:09:54 -- common/autotest_common.sh@641 -- # es=234 00:09:21.353 07:09:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:21.353 07:09:54 -- common/autotest_common.sh@650 -- # es=106 00:09:21.353 07:09:54 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:21.353 07:09:54 -- common/autotest_common.sh@658 -- # es=1 00:09:21.353 07:09:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:21.353 00:09:21.353 real 0m1.479s 00:09:21.353 user 0m1.196s 00:09:21.353 sys 0m0.239s 00:09:21.353 07:09:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:21.353 07:09:54 -- common/autotest_common.sh@10 -- # set +x 00:09:21.353 ************************************ 00:09:21.353 END TEST accel_missing_filename 00:09:21.353 ************************************ 00:09:21.353 07:09:54 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:21.353 07:09:54 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:09:21.353 07:09:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:21.353 07:09:54 -- common/autotest_common.sh@10 -- # set +x 00:09:21.353 ************************************ 00:09:21.353 START TEST accel_compress_verify 00:09:21.353 ************************************ 00:09:21.353 07:09:55 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:21.353 07:09:55 -- common/autotest_common.sh@638 -- # local es=0 00:09:21.353 07:09:55 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:21.353 07:09:55 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:09:21.353 07:09:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:21.353 07:09:55 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:09:21.353 07:09:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:21.353 07:09:55 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:21.353 07:09:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:09:21.353 07:09:55 -- accel/accel.sh@12 -- # build_accel_config 00:09:21.353 07:09:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:21.353 07:09:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:21.353 07:09:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:21.353 07:09:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:21.353 07:09:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:21.353 07:09:55 -- accel/accel.sh@41 -- # local IFS=, 00:09:21.353 07:09:55 -- accel/accel.sh@42 -- # jq -r . 00:09:21.612 [2024-02-13 07:09:55.053868] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:21.612 [2024-02-13 07:09:55.054876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109886 ] 00:09:21.612 [2024-02-13 07:09:55.226099] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.871 [2024-02-13 07:09:55.447394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.871 [2024-02-13 07:09:55.447577] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:22.131 [2024-02-13 07:09:55.658844] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:22.131 [2024-02-13 07:09:55.658986] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:22.698 [2024-02-13 07:09:56.163699] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:09:22.957 00:09:22.957 Compression does not support the verify option, aborting. 
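That abort is the expected outcome: accel_perf's compress workload has no verify path, so -w compress combined with -y is rejected up front, and the NOT wrapper counts the nonzero exit as a pass. The invocation under test, reassembled from the xtrace above (the -c /dev/fd/62 config argument omitted for brevity):

    # sketch: the flag combination this test expects accel_perf to reject;
    # -y (verify) is not supported by the compress workload
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y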
00:09:22.957 07:09:56 -- common/autotest_common.sh@641 -- # es=161 00:09:22.957 07:09:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:22.957 07:09:56 -- common/autotest_common.sh@650 -- # es=33 00:09:22.957 07:09:56 -- common/autotest_common.sh@651 -- # case "$es" in 00:09:22.957 07:09:56 -- common/autotest_common.sh@658 -- # es=1 00:09:22.957 07:09:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:22.957 00:09:22.957 real 0m1.563s 00:09:22.957 user 0m1.303s 00:09:22.957 sys 0m0.218s 00:09:22.957 07:09:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:22.957 07:09:56 -- common/autotest_common.sh@10 -- # set +x 00:09:22.957 ************************************ 00:09:22.957 END TEST accel_compress_verify 00:09:22.957 ************************************ 00:09:22.957 07:09:56 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:09:22.957 07:09:56 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:09:22.957 07:09:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:22.957 07:09:56 -- common/autotest_common.sh@10 -- # set +x 00:09:22.957 ************************************ 00:09:22.957 START TEST accel_wrong_workload 00:09:22.957 ************************************ 00:09:22.957 07:09:56 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w foobar 00:09:22.957 07:09:56 -- common/autotest_common.sh@638 -- # local es=0 00:09:22.957 07:09:56 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:09:22.957 07:09:56 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:09:22.957 07:09:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:22.957 07:09:56 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:09:22.957 07:09:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:22.957 07:09:56 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:09:22.957 07:09:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:09:22.957 07:09:56 -- accel/accel.sh@12 -- # build_accel_config 00:09:22.957 07:09:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:22.957 07:09:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:22.957 07:09:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:22.957 07:09:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:22.957 07:09:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:22.957 07:09:56 -- accel/accel.sh@41 -- # local IFS=, 00:09:22.957 07:09:56 -- accel/accel.sh@42 -- # jq -r . 00:09:23.217 Unsupported workload type: foobar 00:09:23.217 [2024-02-13 07:09:56.669175] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:09:23.217 accel_perf options: 00:09:23.217 [-h help message] 00:09:23.217 [-q queue depth per core] 00:09:23.217 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:23.217 [-T number of threads per core 00:09:23.217 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:09:23.217 [-t time in seconds] 00:09:23.217 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:23.217 [ dif_verify, , dif_generate, dif_generate_copy 00:09:23.217 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:23.217 [-l for compress/decompress workloads, name of uncompressed input file 00:09:23.217 [-S for crc32c workload, use this seed value (default 0) 00:09:23.217 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:23.217 [-f for fill workload, use this BYTE value (default 255) 00:09:23.217 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:23.217 [-y verify result if this switch is on] 00:09:23.217 [-a tasks to allocate per core (default: same value as -q)] 00:09:23.217 Can be used to spread operations across a wider range of memory. 00:09:23.217 07:09:56 -- common/autotest_common.sh@641 -- # es=1 00:09:23.217 07:09:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:23.217 07:09:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:23.217 07:09:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:23.217 00:09:23.217 real 0m0.068s 00:09:23.217 user 0m0.099s 00:09:23.217 sys 0m0.026s 00:09:23.217 07:09:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:23.217 07:09:56 -- common/autotest_common.sh@10 -- # set +x 00:09:23.217 ************************************ 00:09:23.217 END TEST accel_wrong_workload 00:09:23.217 ************************************ 00:09:23.217 07:09:56 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:09:23.217 07:09:56 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:09:23.217 07:09:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:23.217 07:09:56 -- common/autotest_common.sh@10 -- # set +x 00:09:23.217 ************************************ 00:09:23.217 START TEST accel_negative_buffers 00:09:23.217 ************************************ 00:09:23.217 07:09:56 -- common/autotest_common.sh@1102 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:09:23.217 07:09:56 -- common/autotest_common.sh@638 -- # local es=0 00:09:23.217 07:09:56 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:09:23.217 07:09:56 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:09:23.217 07:09:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:23.217 07:09:56 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:09:23.217 07:09:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:23.217 07:09:56 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:09:23.217 07:09:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:09:23.217 07:09:56 -- accel/accel.sh@12 -- # build_accel_config 00:09:23.217 07:09:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:23.217 07:09:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:23.217 07:09:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:23.217 07:09:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:23.217 07:09:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:23.217 07:09:56 -- accel/accel.sh@41 -- # local IFS=, 00:09:23.217 07:09:56 -- accel/accel.sh@42 -- # jq -r . 00:09:23.217 -x option must be non-negative. 
00:09:23.217 [2024-02-13 07:09:56.792667] app.c:1290:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:09:23.217 accel_perf options: 00:09:23.217 [-h help message] 00:09:23.217 [-q queue depth per core] 00:09:23.217 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:09:23.217 [-T number of threads per core 00:09:23.217 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:09:23.217 [-t time in seconds] 00:09:23.217 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:09:23.217 [ dif_verify, , dif_generate, dif_generate_copy 00:09:23.217 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:09:23.217 [-l for compress/decompress workloads, name of uncompressed input file 00:09:23.217 [-S for crc32c workload, use this seed value (default 0) 00:09:23.217 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:09:23.217 [-f for fill workload, use this BYTE value (default 255) 00:09:23.217 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:09:23.217 [-y verify result if this switch is on] 00:09:23.217 [-a tasks to allocate per core (default: same value as -q)] 00:09:23.217 Can be used to spread operations across a wider range of memory. 00:09:23.217 07:09:56 -- common/autotest_common.sh@641 -- # es=1 00:09:23.217 07:09:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:23.217 07:09:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:23.217 07:09:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:23.217 00:09:23.217 real 0m0.072s 00:09:23.217 user 0m0.079s 00:09:23.217 sys 0m0.052s 00:09:23.217 07:09:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:23.217 ************************************ 00:09:23.217 END TEST accel_negative_buffers 00:09:23.217 ************************************ 00:09:23.217 07:09:56 -- common/autotest_common.sh@10 -- # set +x 00:09:23.217 07:09:56 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:09:23.217 07:09:56 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:09:23.217 07:09:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:23.217 07:09:56 -- common/autotest_common.sh@10 -- # set +x 00:09:23.217 ************************************ 00:09:23.217 START TEST accel_crc32c 00:09:23.217 ************************************ 00:09:23.217 07:09:56 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -S 32 -y 00:09:23.217 07:09:56 -- accel/accel.sh@16 -- # local accel_opc 00:09:23.217 07:09:56 -- accel/accel.sh@17 -- # local accel_module 00:09:23.217 07:09:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:23.217 07:09:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:23.217 07:09:56 -- accel/accel.sh@12 -- # build_accel_config 00:09:23.217 07:09:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:23.217 07:09:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:23.217 07:09:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:23.217 07:09:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:23.217 07:09:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:23.217 07:09:56 -- accel/accel.sh@41 -- # local IFS=, 00:09:23.217 07:09:56 -- accel/accel.sh@42 -- # jq -r . 
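In the result tables that follow, the MiB/s column is derived from the transfer rate: transfers per second times the 4096-byte transfer size. A sketch of the arithmetic using the first run's figures below:

    # sketch: bandwidth from the crc32c result table below
    transfers_per_sec=411456 xfer_bytes=4096
    echo $(( transfers_per_sec * xfer_bytes / 1024 / 1024 ))   # 1607 MiB/s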
00:09:23.476 [2024-02-13 07:09:56.915183] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:23.476 [2024-02-13 07:09:56.915404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109981 ] 00:09:23.476 [2024-02-13 07:09:57.091044] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.735 [2024-02-13 07:09:57.326819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.735 [2024-02-13 07:09:57.327073] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:25.113 [2024-02-13 07:09:58.567136] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:26.071 07:09:59 -- accel/accel.sh@18 -- # out=' 00:09:26.071 SPDK Configuration: 00:09:26.071 Core mask: 0x1 00:09:26.071 00:09:26.071 Accel Perf Configuration: 00:09:26.071 Workload Type: crc32c 00:09:26.071 CRC-32C seed: 32 00:09:26.071 Transfer size: 4096 bytes 00:09:26.071 Vector count 1 00:09:26.071 Module: software 00:09:26.071 Queue depth: 32 00:09:26.071 Allocate depth: 32 00:09:26.071 # threads/core: 1 00:09:26.071 Run time: 1 seconds 00:09:26.071 Verify: Yes 00:09:26.071 00:09:26.071 Running for 1 seconds... 00:09:26.071 00:09:26.071 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:26.071 ------------------------------------------------------------------------------------ 00:09:26.071 0,0 411456/s 1607 MiB/s 0 0 00:09:26.071 ==================================================================================== 00:09:26.071 Total 411456/s 1607 MiB/s 0 0' 00:09:26.071 07:09:59 -- accel/accel.sh@20 -- # IFS=: 00:09:26.071 07:09:59 -- accel/accel.sh@20 -- # read -r var val 00:09:26.071 07:09:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:09:26.071 07:09:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:09:26.071 07:09:59 -- accel/accel.sh@12 -- # build_accel_config 00:09:26.071 07:09:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:26.071 07:09:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:26.071 07:09:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:26.071 07:09:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:26.071 07:09:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:26.071 07:09:59 -- accel/accel.sh@41 -- # local IFS=, 00:09:26.071 07:09:59 -- accel/accel.sh@42 -- # jq -r . 00:09:26.071 [2024-02-13 07:09:59.557436] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:09:26.071 [2024-02-13 07:09:59.558550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110022 ] 00:09:26.071 [2024-02-13 07:09:59.735938] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.340 [2024-02-13 07:09:59.991944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.340 [2024-02-13 07:09:59.992145] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val= 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val= 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val=0x1 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val= 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val= 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val=crc32c 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val=32 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val= 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val=software 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@23 -- # accel_module=software 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val=32 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val=32 00:09:26.598 
07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val=1 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val=Yes 00:09:26.598 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.598 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.598 07:10:00 -- accel/accel.sh@21 -- # val= 00:09:26.599 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.599 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.599 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:26.599 07:10:00 -- accel/accel.sh@21 -- # val= 00:09:26.599 07:10:00 -- accel/accel.sh@22 -- # case "$var" in 00:09:26.599 07:10:00 -- accel/accel.sh@20 -- # IFS=: 00:09:26.599 07:10:00 -- accel/accel.sh@20 -- # read -r var val 00:09:27.534 [2024-02-13 07:10:01.214560] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:28.469 07:10:02 -- accel/accel.sh@21 -- # val= 00:09:28.469 07:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # IFS=: 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # read -r var val 00:09:28.469 07:10:02 -- accel/accel.sh@21 -- # val= 00:09:28.469 07:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # IFS=: 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # read -r var val 00:09:28.469 07:10:02 -- accel/accel.sh@21 -- # val= 00:09:28.469 07:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # IFS=: 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # read -r var val 00:09:28.469 07:10:02 -- accel/accel.sh@21 -- # val= 00:09:28.469 07:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # IFS=: 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # read -r var val 00:09:28.469 07:10:02 -- accel/accel.sh@21 -- # val= 00:09:28.469 07:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # IFS=: 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # read -r var val 00:09:28.469 07:10:02 -- accel/accel.sh@21 -- # val= 00:09:28.469 07:10:02 -- accel/accel.sh@22 -- # case "$var" in 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # IFS=: 00:09:28.469 07:10:02 -- accel/accel.sh@20 -- # read -r var val 00:09:28.469 07:10:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:28.469 07:10:02 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:28.469 07:10:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:28.469 ************************************ 00:09:28.469 END TEST accel_crc32c 00:09:28.469 ************************************ 00:09:28.469 00:09:28.469 real 0m5.289s 00:09:28.469 user 0m4.674s 00:09:28.469 sys 0m0.475s 00:09:28.469 07:10:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:28.469 07:10:02 -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.726 07:10:02 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:09:28.726 07:10:02 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:09:28.726 07:10:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:28.726 07:10:02 -- common/autotest_common.sh@10 -- # set +x 00:09:28.726 ************************************ 00:09:28.726 START TEST accel_crc32c_C2 00:09:28.726 ************************************ 00:09:28.726 07:10:02 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w crc32c -y -C 2 00:09:28.726 07:10:02 -- accel/accel.sh@16 -- # local accel_opc 00:09:28.726 07:10:02 -- accel/accel.sh@17 -- # local accel_module 00:09:28.726 07:10:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:28.726 07:10:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:28.726 07:10:02 -- accel/accel.sh@12 -- # build_accel_config 00:09:28.726 07:10:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:28.726 07:10:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:28.726 07:10:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:28.726 07:10:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:28.726 07:10:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:28.726 07:10:02 -- accel/accel.sh@41 -- # local IFS=, 00:09:28.726 07:10:02 -- accel/accel.sh@42 -- # jq -r . 00:09:28.726 [2024-02-13 07:10:02.263217] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:28.726 [2024-02-13 07:10:02.263400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110089 ] 00:09:28.984 [2024-02-13 07:10:02.428793] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.984 [2024-02-13 07:10:02.671073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.984 [2024-02-13 07:10:02.671222] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:30.360 [2024-02-13 07:10:03.904794] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:31.296 07:10:04 -- accel/accel.sh@18 -- # out=' 00:09:31.296 SPDK Configuration: 00:09:31.296 Core mask: 0x1 00:09:31.296 00:09:31.296 Accel Perf Configuration: 00:09:31.296 Workload Type: crc32c 00:09:31.296 CRC-32C seed: 0 00:09:31.296 Transfer size: 4096 bytes 00:09:31.296 Vector count 2 00:09:31.296 Module: software 00:09:31.296 Queue depth: 32 00:09:31.296 Allocate depth: 32 00:09:31.296 # threads/core: 1 00:09:31.296 Run time: 1 seconds 00:09:31.296 Verify: Yes 00:09:31.296 00:09:31.296 Running for 1 seconds... 
00:09:31.296 00:09:31.296 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:31.296 ------------------------------------------------------------------------------------ 00:09:31.296 0,0 303392/s 2370 MiB/s 0 0 00:09:31.296 ==================================================================================== 00:09:31.296 Total 303392/s 1185 MiB/s 0 0' 00:09:31.296 07:10:04 -- accel/accel.sh@20 -- # IFS=: 00:09:31.296 07:10:04 -- accel/accel.sh@20 -- # read -r var val 00:09:31.296 07:10:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:09:31.296 07:10:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:09:31.296 07:10:04 -- accel/accel.sh@12 -- # build_accel_config 00:09:31.296 07:10:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:31.296 07:10:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:31.296 07:10:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:31.296 07:10:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:31.296 07:10:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:31.296 07:10:04 -- accel/accel.sh@41 -- # local IFS=, 00:09:31.296 07:10:04 -- accel/accel.sh@42 -- # jq -r . 00:09:31.296 [2024-02-13 07:10:04.890206] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:31.296 [2024-02-13 07:10:04.890427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110130 ] 00:09:31.554 [2024-02-13 07:10:05.058862] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.813 [2024-02-13 07:10:05.302837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.813 [2024-02-13 07:10:05.303001] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val= 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val= 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val=0x1 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val= 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val= 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val=crc32c 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 
-- accel/accel.sh@21 -- # val=0 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val= 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val=software 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@23 -- # accel_module=software 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val=32 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val=32 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val=1 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val=Yes 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val= 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:32.071 07:10:05 -- accel/accel.sh@21 -- # val= 00:09:32.071 07:10:05 -- accel/accel.sh@22 -- # case "$var" in 00:09:32.071 07:10:05 -- accel/accel.sh@20 -- # IFS=: 00:09:32.072 07:10:05 -- accel/accel.sh@20 -- # read -r var val 00:09:33.059 [2024-02-13 07:10:06.541968] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:33.996 07:10:07 -- accel/accel.sh@21 -- # val= 00:09:33.996 07:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # IFS=: 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # read -r var val 00:09:33.996 07:10:07 -- accel/accel.sh@21 -- # val= 00:09:33.996 07:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # IFS=: 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # read -r var val 00:09:33.996 07:10:07 -- accel/accel.sh@21 -- # val= 00:09:33.996 07:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # IFS=: 00:09:33.996 07:10:07 
-- accel/accel.sh@20 -- # read -r var val 00:09:33.996 07:10:07 -- accel/accel.sh@21 -- # val= 00:09:33.996 07:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # IFS=: 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # read -r var val 00:09:33.996 07:10:07 -- accel/accel.sh@21 -- # val= 00:09:33.996 07:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # IFS=: 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # read -r var val 00:09:33.996 07:10:07 -- accel/accel.sh@21 -- # val= 00:09:33.996 07:10:07 -- accel/accel.sh@22 -- # case "$var" in 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # IFS=: 00:09:33.996 07:10:07 -- accel/accel.sh@20 -- # read -r var val 00:09:33.996 07:10:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:33.996 07:10:07 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:09:33.996 07:10:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:33.996 00:09:33.996 real 0m5.274s 00:09:33.996 user 0m4.744s 00:09:33.996 sys 0m0.401s 00:09:33.996 07:10:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.996 ************************************ 00:09:33.996 END TEST accel_crc32c_C2 00:09:33.996 ************************************ 00:09:33.996 07:10:07 -- common/autotest_common.sh@10 -- # set +x 00:09:33.996 07:10:07 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:09:33.996 07:10:07 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:09:33.996 07:10:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:33.996 07:10:07 -- common/autotest_common.sh@10 -- # set +x 00:09:33.996 ************************************ 00:09:33.996 START TEST accel_copy 00:09:33.996 ************************************ 00:09:33.996 07:10:07 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy -y 00:09:33.996 07:10:07 -- accel/accel.sh@16 -- # local accel_opc 00:09:33.996 07:10:07 -- accel/accel.sh@17 -- # local accel_module 00:09:33.996 07:10:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:09:33.996 07:10:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:33.996 07:10:07 -- accel/accel.sh@12 -- # build_accel_config 00:09:33.996 07:10:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:33.996 07:10:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:33.996 07:10:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:33.996 07:10:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:33.996 07:10:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:33.996 07:10:07 -- accel/accel.sh@41 -- # local IFS=, 00:09:33.996 07:10:07 -- accel/accel.sh@42 -- # jq -r . 00:09:33.996 [2024-02-13 07:10:07.602858] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:09:33.996 [2024-02-13 07:10:07.603836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110189 ] 00:09:34.254 [2024-02-13 07:10:07.780260] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.513 [2024-02-13 07:10:08.029781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.513 [2024-02-13 07:10:08.029988] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:35.890 [2024-02-13 07:10:09.267263] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:36.833 07:10:10 -- accel/accel.sh@18 -- # out=' 00:09:36.833 SPDK Configuration: 00:09:36.833 Core mask: 0x1 00:09:36.833 00:09:36.833 Accel Perf Configuration: 00:09:36.833 Workload Type: copy 00:09:36.833 Transfer size: 4096 bytes 00:09:36.833 Vector count 1 00:09:36.833 Module: software 00:09:36.833 Queue depth: 32 00:09:36.833 Allocate depth: 32 00:09:36.833 # threads/core: 1 00:09:36.833 Run time: 1 seconds 00:09:36.833 Verify: Yes 00:09:36.833 00:09:36.833 Running for 1 seconds... 00:09:36.833 00:09:36.833 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:36.833 ------------------------------------------------------------------------------------ 00:09:36.833 0,0 236640/s 924 MiB/s 0 0 00:09:36.833 ==================================================================================== 00:09:36.833 Total 236640/s 924 MiB/s 0 0' 00:09:36.833 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:36.833 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:36.834 07:10:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:09:36.834 07:10:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:09:36.834 07:10:10 -- accel/accel.sh@12 -- # build_accel_config 00:09:36.834 07:10:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:36.834 07:10:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:36.834 07:10:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:36.834 07:10:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:36.834 07:10:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:36.834 07:10:10 -- accel/accel.sh@41 -- # local IFS=, 00:09:36.834 07:10:10 -- accel/accel.sh@42 -- # jq -r . 00:09:36.834 [2024-02-13 07:10:10.256927] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:09:36.834 [2024-02-13 07:10:10.258067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110223 ] 00:09:36.834 [2024-02-13 07:10:10.435178] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.096 [2024-02-13 07:10:10.684824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.096 [2024-02-13 07:10:10.684954] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val= 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val= 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val=0x1 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val= 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val= 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val=copy 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@24 -- # accel_opc=copy 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val= 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val=software 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@23 -- # accel_module=software 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.358 07:10:10 -- accel/accel.sh@21 -- # val=32 00:09:37.358 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.358 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.359 07:10:10 -- accel/accel.sh@21 -- # val=32 00:09:37.359 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.359 07:10:10 -- accel/accel.sh@21 -- # val=1 00:09:37.359 
07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.359 07:10:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:37.359 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.359 07:10:10 -- accel/accel.sh@21 -- # val=Yes 00:09:37.359 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.359 07:10:10 -- accel/accel.sh@21 -- # val= 00:09:37.359 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:37.359 07:10:10 -- accel/accel.sh@21 -- # val= 00:09:37.359 07:10:10 -- accel/accel.sh@22 -- # case "$var" in 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # IFS=: 00:09:37.359 07:10:10 -- accel/accel.sh@20 -- # read -r var val 00:09:38.296 [2024-02-13 07:10:11.927413] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:39.232 07:10:12 -- accel/accel.sh@21 -- # val= 00:09:39.232 07:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # IFS=: 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # read -r var val 00:09:39.232 07:10:12 -- accel/accel.sh@21 -- # val= 00:09:39.232 07:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # IFS=: 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # read -r var val 00:09:39.232 07:10:12 -- accel/accel.sh@21 -- # val= 00:09:39.232 07:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # IFS=: 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # read -r var val 00:09:39.232 07:10:12 -- accel/accel.sh@21 -- # val= 00:09:39.232 07:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # IFS=: 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # read -r var val 00:09:39.232 07:10:12 -- accel/accel.sh@21 -- # val= 00:09:39.232 07:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # IFS=: 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # read -r var val 00:09:39.232 07:10:12 -- accel/accel.sh@21 -- # val= 00:09:39.232 07:10:12 -- accel/accel.sh@22 -- # case "$var" in 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # IFS=: 00:09:39.232 07:10:12 -- accel/accel.sh@20 -- # read -r var val 00:09:39.232 07:10:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:39.232 07:10:12 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:09:39.232 07:10:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:39.232 00:09:39.232 real 0m5.324s 00:09:39.232 user 0m4.744s 00:09:39.232 sys 0m0.449s 00:09:39.232 07:10:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:39.232 ************************************ 00:09:39.232 END TEST accel_copy 00:09:39.232 ************************************ 00:09:39.232 07:10:12 -- common/autotest_common.sh@10 -- # set +x 00:09:39.232 07:10:12 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:39.232 07:10:12 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:09:39.232 07:10:12 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:09:39.232 07:10:12 -- common/autotest_common.sh@10 -- # set +x 00:09:39.491 ************************************ 00:09:39.491 START TEST accel_fill 00:09:39.491 ************************************ 00:09:39.491 07:10:12 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:39.491 07:10:12 -- accel/accel.sh@16 -- # local accel_opc 00:09:39.491 07:10:12 -- accel/accel.sh@17 -- # local accel_module 00:09:39.491 07:10:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:39.491 07:10:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:39.491 07:10:12 -- accel/accel.sh@12 -- # build_accel_config 00:09:39.491 07:10:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:39.491 07:10:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:39.491 07:10:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:39.491 07:10:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:39.491 07:10:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:39.491 07:10:12 -- accel/accel.sh@41 -- # local IFS=, 00:09:39.491 07:10:12 -- accel/accel.sh@42 -- # jq -r . 00:09:39.491 [2024-02-13 07:10:12.995956] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:39.491 [2024-02-13 07:10:12.996358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110299 ] 00:09:39.491 [2024-02-13 07:10:13.175535] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.750 [2024-02-13 07:10:13.418969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.750 [2024-02-13 07:10:13.419135] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:41.128 [2024-02-13 07:10:14.653734] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:42.064 07:10:15 -- accel/accel.sh@18 -- # out=' 00:09:42.064 SPDK Configuration: 00:09:42.064 Core mask: 0x1 00:09:42.064 00:09:42.064 Accel Perf Configuration: 00:09:42.064 Workload Type: fill 00:09:42.064 Fill pattern: 0x80 00:09:42.064 Transfer size: 4096 bytes 00:09:42.064 Vector count 1 00:09:42.064 Module: software 00:09:42.064 Queue depth: 64 00:09:42.064 Allocate depth: 64 00:09:42.064 # threads/core: 1 00:09:42.064 Run time: 1 seconds 00:09:42.064 Verify: Yes 00:09:42.064 00:09:42.064 Running for 1 seconds... 
00:09:42.064 00:09:42.064 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:42.064 ------------------------------------------------------------------------------------ 00:09:42.064 0,0 367872/s 1437 MiB/s 0 0 00:09:42.064 ==================================================================================== 00:09:42.064 Total 367872/s 1437 MiB/s 0 0' 00:09:42.064 07:10:15 -- accel/accel.sh@20 -- # IFS=: 00:09:42.064 07:10:15 -- accel/accel.sh@20 -- # read -r var val 00:09:42.064 07:10:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:42.064 07:10:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:09:42.064 07:10:15 -- accel/accel.sh@12 -- # build_accel_config 00:09:42.064 07:10:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:42.064 07:10:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:42.064 07:10:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:42.064 07:10:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:42.064 07:10:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:42.064 07:10:15 -- accel/accel.sh@41 -- # local IFS=, 00:09:42.064 07:10:15 -- accel/accel.sh@42 -- # jq -r . 00:09:42.064 [2024-02-13 07:10:15.652933] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:42.064 [2024-02-13 07:10:15.653228] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110331 ] 00:09:42.324 [2024-02-13 07:10:15.828830] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.583 [2024-02-13 07:10:16.067843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.583 [2024-02-13 07:10:16.068047] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:42.842 07:10:16 -- accel/accel.sh@21 -- # val= 00:09:42.842 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.842 07:10:16 -- accel/accel.sh@21 -- # val= 00:09:42.842 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.842 07:10:16 -- accel/accel.sh@21 -- # val=0x1 00:09:42.842 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.842 07:10:16 -- accel/accel.sh@21 -- # val= 00:09:42.842 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.842 07:10:16 -- accel/accel.sh@21 -- # val= 00:09:42.842 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.842 07:10:16 -- accel/accel.sh@21 -- # val=fill 00:09:42.842 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.842 07:10:16 -- accel/accel.sh@24 -- # accel_opc=fill 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # read -r var val 
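This fill pass is the only one in the section that overrides the queueing defaults: -f 128 is reported above as fill pattern 0x80, and -q 64 with -a 64 raises both the queue depth and the per-core task allocation (per the usage text, -a defaults to the -q value, so passing both is explicit rather than required). A hedged standalone equivalent of the run whose parameters are echoed in the val lines below:

    # TEST accel_fill: pattern 0x80, queue depth 64, 64 preallocated tasks per core
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y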
00:09:42.842 07:10:16 -- accel/accel.sh@21 -- # val=0x80 00:09:42.842 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.842 07:10:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:42.842 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.842 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.843 07:10:16 -- accel/accel.sh@21 -- # val= 00:09:42.843 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.843 07:10:16 -- accel/accel.sh@21 -- # val=software 00:09:42.843 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.843 07:10:16 -- accel/accel.sh@23 -- # accel_module=software 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.843 07:10:16 -- accel/accel.sh@21 -- # val=64 00:09:42.843 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.843 07:10:16 -- accel/accel.sh@21 -- # val=64 00:09:42.843 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.843 07:10:16 -- accel/accel.sh@21 -- # val=1 00:09:42.843 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.843 07:10:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:42.843 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.843 07:10:16 -- accel/accel.sh@21 -- # val=Yes 00:09:42.843 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.843 07:10:16 -- accel/accel.sh@21 -- # val= 00:09:42.843 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:42.843 07:10:16 -- accel/accel.sh@21 -- # val= 00:09:42.843 07:10:16 -- accel/accel.sh@22 -- # case "$var" in 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # IFS=: 00:09:42.843 07:10:16 -- accel/accel.sh@20 -- # read -r var val 00:09:43.779 [2024-02-13 07:10:17.300901] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:44.716 07:10:18 -- accel/accel.sh@21 -- # val= 00:09:44.716 07:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.716 07:10:18 -- accel/accel.sh@20 -- # IFS=: 00:09:44.716 07:10:18 -- accel/accel.sh@20 -- # read -r var val 00:09:44.716 07:10:18 -- accel/accel.sh@21 -- # val= 00:09:44.716 07:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # IFS=: 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # read -r var val 00:09:44.717 07:10:18 -- accel/accel.sh@21 -- # val= 00:09:44.717 07:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # 
IFS=: 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # read -r var val 00:09:44.717 07:10:18 -- accel/accel.sh@21 -- # val= 00:09:44.717 07:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # IFS=: 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # read -r var val 00:09:44.717 07:10:18 -- accel/accel.sh@21 -- # val= 00:09:44.717 07:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # IFS=: 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # read -r var val 00:09:44.717 07:10:18 -- accel/accel.sh@21 -- # val= 00:09:44.717 07:10:18 -- accel/accel.sh@22 -- # case "$var" in 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # IFS=: 00:09:44.717 07:10:18 -- accel/accel.sh@20 -- # read -r var val 00:09:44.717 ************************************ 00:09:44.717 END TEST accel_fill 00:09:44.717 ************************************ 00:09:44.717 07:10:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:44.717 07:10:18 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:09:44.717 07:10:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:44.717 00:09:44.717 real 0m5.305s 00:09:44.717 user 0m4.726s 00:09:44.717 sys 0m0.453s 00:09:44.717 07:10:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:44.717 07:10:18 -- common/autotest_common.sh@10 -- # set +x 00:09:44.717 07:10:18 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:09:44.717 07:10:18 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:09:44.717 07:10:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:44.717 07:10:18 -- common/autotest_common.sh@10 -- # set +x 00:09:44.717 ************************************ 00:09:44.717 START TEST accel_copy_crc32c 00:09:44.717 ************************************ 00:09:44.717 07:10:18 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y 00:09:44.717 07:10:18 -- accel/accel.sh@16 -- # local accel_opc 00:09:44.717 07:10:18 -- accel/accel.sh@17 -- # local accel_module 00:09:44.717 07:10:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:44.717 07:10:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:44.717 07:10:18 -- accel/accel.sh@12 -- # build_accel_config 00:09:44.717 07:10:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:44.717 07:10:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:44.717 07:10:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.717 07:10:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:44.717 07:10:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:44.717 07:10:18 -- accel/accel.sh@41 -- # local IFS=, 00:09:44.717 07:10:18 -- accel/accel.sh@42 -- # jq -r . 00:09:44.717 [2024-02-13 07:10:18.345033] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:09:44.717 [2024-02-13 07:10:18.345543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110385 ] 00:09:44.976 [2024-02-13 07:10:18.525262] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.235 [2024-02-13 07:10:18.799823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.235 [2024-02-13 07:10:18.800395] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:46.612 [2024-02-13 07:10:20.039784] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:47.578 07:10:20 -- accel/accel.sh@18 -- # out=' 00:09:47.578 SPDK Configuration: 00:09:47.578 Core mask: 0x1 00:09:47.578 00:09:47.578 Accel Perf Configuration: 00:09:47.578 Workload Type: copy_crc32c 00:09:47.578 CRC-32C seed: 0 00:09:47.578 Vector size: 4096 bytes 00:09:47.578 Transfer size: 4096 bytes 00:09:47.578 Vector count 1 00:09:47.578 Module: software 00:09:47.578 Queue depth: 32 00:09:47.578 Allocate depth: 32 00:09:47.578 # threads/core: 1 00:09:47.578 Run time: 1 seconds 00:09:47.578 Verify: Yes 00:09:47.578 00:09:47.578 Running for 1 seconds... 00:09:47.578 00:09:47.578 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:47.578 ------------------------------------------------------------------------------------ 00:09:47.578 0,0 198528/s 775 MiB/s 0 0 00:09:47.578 ==================================================================================== 00:09:47.579 Total 198528/s 775 MiB/s 0 0' 00:09:47.579 07:10:20 -- accel/accel.sh@20 -- # IFS=: 00:09:47.579 07:10:20 -- accel/accel.sh@20 -- # read -r var val 00:09:47.579 07:10:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:09:47.579 07:10:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:09:47.579 07:10:20 -- accel/accel.sh@12 -- # build_accel_config 00:09:47.579 07:10:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:47.579 07:10:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:47.579 07:10:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:47.579 07:10:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:47.579 07:10:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:47.579 07:10:20 -- accel/accel.sh@41 -- # local IFS=, 00:09:47.579 07:10:20 -- accel/accel.sh@42 -- # jq -r . 00:09:47.579 [2024-02-13 07:10:21.038737] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:09:47.579 [2024-02-13 07:10:21.039260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110431 ] 00:09:47.579 [2024-02-13 07:10:21.205831] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.845 [2024-02-13 07:10:21.457063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.845 [2024-02-13 07:10:21.457500] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:48.118 07:10:21 -- accel/accel.sh@21 -- # val= 00:09:48.118 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.118 07:10:21 -- accel/accel.sh@21 -- # val= 00:09:48.118 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.118 07:10:21 -- accel/accel.sh@21 -- # val=0x1 00:09:48.118 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.118 07:10:21 -- accel/accel.sh@21 -- # val= 00:09:48.118 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.118 07:10:21 -- accel/accel.sh@21 -- # val= 00:09:48.118 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.118 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.118 07:10:21 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:48.118 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val=0 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val= 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val=software 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@23 -- # accel_module=software 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # 
val=32 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val=32 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val=1 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val=Yes 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val= 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:48.119 07:10:21 -- accel/accel.sh@21 -- # val= 00:09:48.119 07:10:21 -- accel/accel.sh@22 -- # case "$var" in 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # IFS=: 00:09:48.119 07:10:21 -- accel/accel.sh@20 -- # read -r var val 00:09:49.063 [2024-02-13 07:10:22.711163] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:50.001 07:10:23 -- accel/accel.sh@21 -- # val= 00:09:50.001 07:10:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # IFS=: 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # read -r var val 00:09:50.001 07:10:23 -- accel/accel.sh@21 -- # val= 00:09:50.001 07:10:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # IFS=: 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # read -r var val 00:09:50.001 07:10:23 -- accel/accel.sh@21 -- # val= 00:09:50.001 07:10:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # IFS=: 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # read -r var val 00:09:50.001 07:10:23 -- accel/accel.sh@21 -- # val= 00:09:50.001 07:10:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # IFS=: 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # read -r var val 00:09:50.001 07:10:23 -- accel/accel.sh@21 -- # val= 00:09:50.001 07:10:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # IFS=: 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # read -r var val 00:09:50.001 07:10:23 -- accel/accel.sh@21 -- # val= 00:09:50.001 07:10:23 -- accel/accel.sh@22 -- # case "$var" in 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # IFS=: 00:09:50.001 07:10:23 -- accel/accel.sh@20 -- # read -r var val 00:09:50.001 ************************************ 00:09:50.001 END TEST accel_copy_crc32c 00:09:50.001 ************************************ 00:09:50.001 07:10:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:50.001 07:10:23 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:50.001 07:10:23 -- 
accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:50.001 00:09:50.001 real 0m5.352s 00:09:50.001 user 0m4.723s 00:09:50.001 sys 0m0.487s 00:09:50.001 07:10:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:50.001 07:10:23 -- common/autotest_common.sh@10 -- # set +x 00:09:50.260 07:10:23 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:09:50.260 07:10:23 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:09:50.260 07:10:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:50.260 07:10:23 -- common/autotest_common.sh@10 -- # set +x 00:09:50.260 ************************************ 00:09:50.260 START TEST accel_copy_crc32c_C2 00:09:50.260 ************************************ 00:09:50.260 07:10:23 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:09:50.260 07:10:23 -- accel/accel.sh@16 -- # local accel_opc 00:09:50.260 07:10:23 -- accel/accel.sh@17 -- # local accel_module 00:09:50.260 07:10:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:50.260 07:10:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:50.260 07:10:23 -- accel/accel.sh@12 -- # build_accel_config 00:09:50.260 07:10:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:50.260 07:10:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:50.260 07:10:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:50.260 07:10:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:50.260 07:10:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:50.260 07:10:23 -- accel/accel.sh@41 -- # local IFS=, 00:09:50.260 07:10:23 -- accel/accel.sh@42 -- # jq -r . 00:09:50.260 [2024-02-13 07:10:23.741278] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:50.260 [2024-02-13 07:10:23.741476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110495 ] 00:09:50.260 [2024-02-13 07:10:23.908781] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.519 [2024-02-13 07:10:24.144198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.519 [2024-02-13 07:10:24.144353] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:51.896 [2024-02-13 07:10:25.380970] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:52.834 07:10:26 -- accel/accel.sh@18 -- # out=' 00:09:52.834 SPDK Configuration: 00:09:52.834 Core mask: 0x1 00:09:52.834 00:09:52.834 Accel Perf Configuration: 00:09:52.834 Workload Type: copy_crc32c 00:09:52.834 CRC-32C seed: 0 00:09:52.834 Vector size: 4096 bytes 00:09:52.834 Transfer size: 8192 bytes 00:09:52.834 Vector count 2 00:09:52.834 Module: software 00:09:52.834 Queue depth: 32 00:09:52.834 Allocate depth: 32 00:09:52.834 # threads/core: 1 00:09:52.834 Run time: 1 seconds 00:09:52.834 Verify: Yes 00:09:52.834 00:09:52.834 Running for 1 seconds... 
00:09:52.834 00:09:52.834 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:52.834 ------------------------------------------------------------------------------------ 00:09:52.834 0,0 138944/s 1085 MiB/s 0 0 00:09:52.834 ==================================================================================== 00:09:52.834 Total 138944/s 542 MiB/s 0 0' 00:09:52.834 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:52.834 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:52.834 07:10:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:09:52.834 07:10:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:09:52.834 07:10:26 -- accel/accel.sh@12 -- # build_accel_config 00:09:52.834 07:10:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:52.834 07:10:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:52.834 07:10:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:52.834 07:10:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:52.834 07:10:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:52.834 07:10:26 -- accel/accel.sh@41 -- # local IFS=, 00:09:52.834 07:10:26 -- accel/accel.sh@42 -- # jq -r . 00:09:52.834 [2024-02-13 07:10:26.324513] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:52.834 [2024-02-13 07:10:26.325490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110541 ] 00:09:52.834 [2024-02-13 07:10:26.494057] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.094 [2024-02-13 07:10:26.679506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.094 [2024-02-13 07:10:26.679712] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val= 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val= 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val=0x1 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val= 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val= 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val=copy_crc32c 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 
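The throughput table above is internally consistent: MiB/s is transfers per second times the transfer size. A quick shell check (mib_per_s is a made-up helper name for illustration, not part of the harness):

mib_per_s() { echo "$(( $1 * $2 / 1024 / 1024 )) MiB/s"; }
mib_per_s 138944 8192   # per-core row: 1085 MiB/s at the 8192-byte transfer size
mib_per_s 138944 4096   # Total row: 542 MiB/s, which appears to use the 4096-byte vector size instead

The same arithmetic reproduces the Total rows of the dualcast, compare, xor and dif_verify tables later in this run.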
00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val=0 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val='8192 bytes' 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val= 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val=software 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@23 -- # accel_module=software 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val=32 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val=32 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val=1 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val=Yes 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val= 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:53.353 07:10:26 -- accel/accel.sh@21 -- # val= 00:09:53.353 07:10:26 -- accel/accel.sh@22 -- # case "$var" in 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # IFS=: 00:09:53.353 07:10:26 -- accel/accel.sh@20 -- # read -r var val 00:09:54.292 [2024-02-13 07:10:27.897382] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:55.228 07:10:28 -- accel/accel.sh@21 -- # val= 00:09:55.228 07:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # IFS=: 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # read -r var val 00:09:55.228 07:10:28 -- accel/accel.sh@21 -- # val= 00:09:55.228 07:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.228 07:10:28 -- 
accel/accel.sh@20 -- # IFS=: 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # read -r var val 00:09:55.228 07:10:28 -- accel/accel.sh@21 -- # val= 00:09:55.228 07:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # IFS=: 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # read -r var val 00:09:55.228 07:10:28 -- accel/accel.sh@21 -- # val= 00:09:55.228 07:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # IFS=: 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # read -r var val 00:09:55.228 07:10:28 -- accel/accel.sh@21 -- # val= 00:09:55.228 07:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # IFS=: 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # read -r var val 00:09:55.228 07:10:28 -- accel/accel.sh@21 -- # val= 00:09:55.228 07:10:28 -- accel/accel.sh@22 -- # case "$var" in 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # IFS=: 00:09:55.228 07:10:28 -- accel/accel.sh@20 -- # read -r var val 00:09:55.228 07:10:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:55.228 07:10:28 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:09:55.228 07:10:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:55.228 ************************************ 00:09:55.228 END TEST accel_copy_crc32c_C2 00:09:55.228 ************************************ 00:09:55.228 00:09:55.228 real 0m4.927s 00:09:55.228 user 0m4.378s 00:09:55.228 sys 0m0.376s 00:09:55.228 07:10:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:55.229 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:09:55.229 07:10:28 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:09:55.229 07:10:28 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:09:55.229 07:10:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:55.229 07:10:28 -- common/autotest_common.sh@10 -- # set +x 00:09:55.229 ************************************ 00:09:55.229 START TEST accel_dualcast 00:09:55.229 ************************************ 00:09:55.229 07:10:28 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dualcast -y 00:09:55.229 07:10:28 -- accel/accel.sh@16 -- # local accel_opc 00:09:55.229 07:10:28 -- accel/accel.sh@17 -- # local accel_module 00:09:55.229 07:10:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:09:55.229 07:10:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:55.229 07:10:28 -- accel/accel.sh@12 -- # build_accel_config 00:09:55.229 07:10:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:55.229 07:10:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:55.229 07:10:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.229 07:10:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:55.229 07:10:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:55.229 07:10:28 -- accel/accel.sh@41 -- # local IFS=, 00:09:55.229 07:10:28 -- accel/accel.sh@42 -- # jq -r . 00:09:55.229 [2024-02-13 07:10:28.741025] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
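The END TEST banner, the real/user/sys line next to it, and the START TEST banner that follows all come from the run_test wrapper invoked here (run_test accel_dualcast accel_test -t 1 -w dualcast -y). A minimal sketch of that pattern, assuming the real helper in autotest_common.sh also manages xtrace and exit codes:

run_test() {
  local name=$1; shift
  echo "START TEST $name"
  time "$@"                # emits the real/user/sys lines seen in the log
  echo "END TEST $name"
}

Under this sketch, run_test accel_dualcast accel_test -t 1 -w dualcast -y would produce exactly the banner-and-timing framing that surrounds each test in this log.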
00:09:55.229 [2024-02-13 07:10:28.742088] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110588 ] 00:09:55.229 [2024-02-13 07:10:28.904215] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.488 [2024-02-13 07:10:29.087109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.488 [2024-02-13 07:10:29.087524] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:56.865 [2024-02-13 07:10:30.287996] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:57.450 07:10:31 -- accel/accel.sh@18 -- # out=' 00:09:57.450 SPDK Configuration: 00:09:57.450 Core mask: 0x1 00:09:57.450 00:09:57.450 Accel Perf Configuration: 00:09:57.450 Workload Type: dualcast 00:09:57.450 Transfer size: 4096 bytes 00:09:57.450 Vector count 1 00:09:57.450 Module: software 00:09:57.450 Queue depth: 32 00:09:57.450 Allocate depth: 32 00:09:57.450 # threads/core: 1 00:09:57.450 Run time: 1 seconds 00:09:57.450 Verify: Yes 00:09:57.450 00:09:57.450 Running for 1 seconds... 00:09:57.450 00:09:57.450 Core,Thread Transfers Bandwidth Failed Miscompares 00:09:57.450 ------------------------------------------------------------------------------------ 00:09:57.450 0,0 330112/s 1289 MiB/s 0 0 00:09:57.450 ==================================================================================== 00:09:57.450 Total 330112/s 1289 MiB/s 0 0' 00:09:57.450 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.450 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.450 07:10:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:09:57.451 07:10:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:09:57.451 07:10:31 -- accel/accel.sh@12 -- # build_accel_config 00:09:57.451 07:10:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:57.451 07:10:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:57.451 07:10:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:57.451 07:10:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:57.451 07:10:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:57.451 07:10:31 -- accel/accel.sh@41 -- # local IFS=, 00:09:57.451 07:10:31 -- accel/accel.sh@42 -- # jq -r . 00:09:57.451 [2024-02-13 07:10:31.069454] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
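Every test here drives the same binary, /home/vagrant/spdk_repo/spdk/build/examples/accel_perf. From the configuration dumps it prints, the flags map as follows: -t is the run time in seconds ("Run time: 1 seconds"), -w the workload ("Workload Type: dualcast"), -y enables verification ("Verify: Yes"), -C the vector count for copy_crc32c, -x the number of xor source buffers, and -c /dev/fd/62 points it at the JSON accel config that build_accel_config assembles and feeds in on fd 62. A hand rerun of this case, minus the harness-supplied config (flag readings are inferred from the dumps, so double-check against accel_perf's help output):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y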
00:09:57.451 [2024-02-13 07:10:31.069972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110629 ] 00:09:57.709 [2024-02-13 07:10:31.237279] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.968 [2024-02-13 07:10:31.440673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.968 [2024-02-13 07:10:31.440839] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val= 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val= 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val=0x1 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val= 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val= 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val=dualcast 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val= 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val=software 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@23 -- # accel_module=software 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val=32 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val=32 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val=1 
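The long val=/case/IFS=:/read runs that dominate this log are bash xtrace from accel.sh parsing that configuration dump line by line; the \s\o\f\t\w\a\r\e at each END TEST is just how xtrace escapes the literal pattern in [[ $accel_module == software ]]. A minimal reconstruction of the loop the trace implies (simplified; the real accel.sh does more):

# $out is assumed to hold the captured accel_perf output shown above
while IFS=: read -r var val; do
  case "$var" in
    *'Workload Type') accel_opc=${val# } ;;   # e.g. dualcast  (accel.sh@24)
    *Module) accel_module=${val# } ;;         # e.g. software  (accel.sh@23)
  esac
done <<< "$out"
[[ -n $accel_module && -n $accel_opc ]]       # the accel.sh@28 checks at each END TEST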
00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val=Yes 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val= 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:57.968 07:10:31 -- accel/accel.sh@21 -- # val= 00:09:57.968 07:10:31 -- accel/accel.sh@22 -- # case "$var" in 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # IFS=: 00:09:57.968 07:10:31 -- accel/accel.sh@20 -- # read -r var val 00:09:59.345 [2024-02-13 07:10:32.634808] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:09:59.913 07:10:33 -- accel/accel.sh@21 -- # val= 00:09:59.913 07:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # IFS=: 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # read -r var val 00:09:59.913 07:10:33 -- accel/accel.sh@21 -- # val= 00:09:59.913 07:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # IFS=: 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # read -r var val 00:09:59.913 07:10:33 -- accel/accel.sh@21 -- # val= 00:09:59.913 07:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # IFS=: 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # read -r var val 00:09:59.913 07:10:33 -- accel/accel.sh@21 -- # val= 00:09:59.913 07:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # IFS=: 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # read -r var val 00:09:59.913 07:10:33 -- accel/accel.sh@21 -- # val= 00:09:59.913 07:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # IFS=: 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # read -r var val 00:09:59.913 07:10:33 -- accel/accel.sh@21 -- # val= 00:09:59.913 07:10:33 -- accel/accel.sh@22 -- # case "$var" in 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # IFS=: 00:09:59.913 07:10:33 -- accel/accel.sh@20 -- # read -r var val 00:09:59.913 ************************************ 00:09:59.913 END TEST accel_dualcast 00:09:59.913 ************************************ 00:09:59.913 07:10:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:09:59.913 07:10:33 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:09:59.913 07:10:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:59.913 00:09:59.913 real 0m4.669s 00:09:59.913 user 0m4.143s 00:09:59.913 sys 0m0.362s 00:09:59.913 07:10:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:59.913 07:10:33 -- common/autotest_common.sh@10 -- # set +x 00:09:59.913 07:10:33 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:09:59.913 07:10:33 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:09:59.913 
07:10:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:09:59.913 07:10:33 -- common/autotest_common.sh@10 -- # set +x 00:09:59.913 ************************************ 00:09:59.913 START TEST accel_compare 00:09:59.913 ************************************ 00:09:59.913 07:10:33 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compare -y 00:09:59.913 07:10:33 -- accel/accel.sh@16 -- # local accel_opc 00:09:59.913 07:10:33 -- accel/accel.sh@17 -- # local accel_module 00:09:59.913 07:10:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:09:59.913 07:10:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:09:59.913 07:10:33 -- accel/accel.sh@12 -- # build_accel_config 00:09:59.913 07:10:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:09:59.913 07:10:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:09:59.913 07:10:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:59.913 07:10:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:09:59.913 07:10:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:09:59.913 07:10:33 -- accel/accel.sh@41 -- # local IFS=, 00:09:59.913 07:10:33 -- accel/accel.sh@42 -- # jq -r . 00:09:59.913 [2024-02-13 07:10:33.465030] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:09:59.913 [2024-02-13 07:10:33.465402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110697 ] 00:10:00.171 [2024-02-13 07:10:33.636046] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.171 [2024-02-13 07:10:33.834892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.171 [2024-02-13 07:10:33.835282] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:01.549 [2024-02-13 07:10:35.067050] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:02.486 07:10:35 -- accel/accel.sh@18 -- # out=' 00:10:02.486 SPDK Configuration: 00:10:02.486 Core mask: 0x1 00:10:02.486 00:10:02.486 Accel Perf Configuration: 00:10:02.486 Workload Type: compare 00:10:02.486 Transfer size: 4096 bytes 00:10:02.486 Vector count 1 00:10:02.486 Module: software 00:10:02.486 Queue depth: 32 00:10:02.486 Allocate depth: 32 00:10:02.486 # threads/core: 1 00:10:02.486 Run time: 1 seconds 00:10:02.486 Verify: Yes 00:10:02.486 00:10:02.486 Running for 1 seconds... 
00:10:02.486 00:10:02.486 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:02.486 ------------------------------------------------------------------------------------ 00:10:02.486 0,0 448096/s 1750 MiB/s 0 0 00:10:02.486 ==================================================================================== 00:10:02.486 Total 448096/s 1750 MiB/s 0 0' 00:10:02.486 07:10:35 -- accel/accel.sh@20 -- # IFS=: 00:10:02.486 07:10:35 -- accel/accel.sh@20 -- # read -r var val 00:10:02.486 07:10:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:10:02.486 07:10:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:10:02.486 07:10:35 -- accel/accel.sh@12 -- # build_accel_config 00:10:02.486 07:10:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:02.486 07:10:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:02.486 07:10:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:02.486 07:10:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:02.486 07:10:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:02.486 07:10:35 -- accel/accel.sh@41 -- # local IFS=, 00:10:02.486 07:10:35 -- accel/accel.sh@42 -- # jq -r . 00:10:02.486 [2024-02-13 07:10:35.857297] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:10:02.486 [2024-02-13 07:10:35.857659] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110736 ] 00:10:02.486 [2024-02-13 07:10:36.013720] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.747 [2024-02-13 07:10:36.204212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.747 [2024-02-13 07:10:36.204659] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val= 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val= 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val=0x1 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val= 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val= 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val=compare 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@24 -- # accel_opc=compare 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val= 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val=software 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@23 -- # accel_module=software 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val=32 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val=32 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val=1 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val=Yes 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val= 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:02.747 07:10:36 -- accel/accel.sh@21 -- # val= 00:10:02.747 07:10:36 -- accel/accel.sh@22 -- # case "$var" in 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # IFS=: 00:10:02.747 07:10:36 -- accel/accel.sh@20 -- # read -r var val 00:10:04.125 [2024-02-13 07:10:37.396450] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:04.693 07:10:38 -- accel/accel.sh@21 -- # val= 00:10:04.693 07:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # IFS=: 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # read -r var val 00:10:04.693 07:10:38 -- accel/accel.sh@21 -- # val= 00:10:04.693 07:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # IFS=: 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # read -r var val 00:10:04.693 07:10:38 -- accel/accel.sh@21 -- # val= 00:10:04.693 07:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # IFS=: 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # read -r var val 00:10:04.693 07:10:38 -- accel/accel.sh@21 -- # val= 00:10:04.693 07:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # IFS=: 00:10:04.693 07:10:38 -- 
accel/accel.sh@20 -- # read -r var val 00:10:04.693 07:10:38 -- accel/accel.sh@21 -- # val= 00:10:04.693 07:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # IFS=: 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # read -r var val 00:10:04.693 07:10:38 -- accel/accel.sh@21 -- # val= 00:10:04.693 07:10:38 -- accel/accel.sh@22 -- # case "$var" in 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # IFS=: 00:10:04.693 07:10:38 -- accel/accel.sh@20 -- # read -r var val 00:10:04.693 ************************************ 00:10:04.693 END TEST accel_compare 00:10:04.693 ************************************ 00:10:04.693 07:10:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:04.693 07:10:38 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:10:04.693 07:10:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:04.693 00:10:04.693 real 0m4.718s 00:10:04.693 user 0m4.239s 00:10:04.693 sys 0m0.346s 00:10:04.693 07:10:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:04.693 07:10:38 -- common/autotest_common.sh@10 -- # set +x 00:10:04.693 07:10:38 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:10:04.693 07:10:38 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:10:04.693 07:10:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:04.693 07:10:38 -- common/autotest_common.sh@10 -- # set +x 00:10:04.693 ************************************ 00:10:04.693 START TEST accel_xor 00:10:04.693 ************************************ 00:10:04.693 07:10:38 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y 00:10:04.693 07:10:38 -- accel/accel.sh@16 -- # local accel_opc 00:10:04.694 07:10:38 -- accel/accel.sh@17 -- # local accel_module 00:10:04.694 07:10:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:10:04.694 07:10:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:04.694 07:10:38 -- accel/accel.sh@12 -- # build_accel_config 00:10:04.694 07:10:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:04.694 07:10:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:04.694 07:10:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.694 07:10:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:04.694 07:10:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:04.694 07:10:38 -- accel/accel.sh@41 -- # local IFS=, 00:10:04.694 07:10:38 -- accel/accel.sh@42 -- # jq -r . 00:10:04.694 [2024-02-13 07:10:38.237950] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
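accel_compare just finished as the fastest software workload so far (448096 transfers/s, 1750 MiB/s), plausibly because a compare reads its two buffers and writes nothing. Functionally it is a memcmp over the buffers; in shell terms (illustration only, not what accel_perf does internally):

head -c 4096 /dev/urandom > a.bin
cp a.bin b.bin
cmp -s a.bin b.bin && echo "buffers match, so the Failed column stays 0"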
00:10:04.694 [2024-02-13 07:10:38.239208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110790 ] 00:10:04.953 [2024-02-13 07:10:38.409921] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.953 [2024-02-13 07:10:38.588114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.953 [2024-02-13 07:10:38.588538] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:06.330 [2024-02-13 07:10:39.778325] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:06.898 07:10:40 -- accel/accel.sh@18 -- # out=' 00:10:06.898 SPDK Configuration: 00:10:06.898 Core mask: 0x1 00:10:06.898 00:10:06.898 Accel Perf Configuration: 00:10:06.898 Workload Type: xor 00:10:06.898 Source buffers: 2 00:10:06.898 Transfer size: 4096 bytes 00:10:06.898 Vector count 1 00:10:06.898 Module: software 00:10:06.898 Queue depth: 32 00:10:06.898 Allocate depth: 32 00:10:06.898 # threads/core: 1 00:10:06.898 Run time: 1 seconds 00:10:06.898 Verify: Yes 00:10:06.898 00:10:06.898 Running for 1 seconds... 00:10:06.898 00:10:06.898 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:06.898 ------------------------------------------------------------------------------------ 00:10:06.898 0,0 253280/s 989 MiB/s 0 0 00:10:06.898 ==================================================================================== 00:10:06.898 Total 253280/s 989 MiB/s 0 0' 00:10:06.898 07:10:40 -- accel/accel.sh@20 -- # IFS=: 00:10:06.898 07:10:40 -- accel/accel.sh@20 -- # read -r var val 00:10:06.898 07:10:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:10:06.898 07:10:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:10:06.898 07:10:40 -- accel/accel.sh@12 -- # build_accel_config 00:10:06.898 07:10:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:06.898 07:10:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:06.898 07:10:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:06.898 07:10:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:06.898 07:10:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:06.898 07:10:40 -- accel/accel.sh@41 -- # local IFS=, 00:10:06.898 07:10:40 -- accel/accel.sh@42 -- # jq -r . 00:10:06.898 [2024-02-13 07:10:40.576485] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
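The xor workload reported above (-w xor -y, "Source buffers: 2") writes the bytewise XOR of its source buffers to the destination, and with -y the result is checked after the run. On a single byte, for illustration:

printf '0x%02x\n' $(( 0xAA ^ 0x55 ))   # -> 0xff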
00:10:06.898 [2024-02-13 07:10:40.576945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110824 ] 00:10:07.157 [2024-02-13 07:10:40.741604] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.416 [2024-02-13 07:10:40.922774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.416 [2024-02-13 07:10:40.923205] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val= 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val= 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val=0x1 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val= 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val= 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val=xor 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val=2 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val= 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val=software 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@23 -- # accel_module=software 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val=32 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val=32 00:10:07.676 07:10:41 
-- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val=1 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val=Yes 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val= 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:07.676 07:10:41 -- accel/accel.sh@21 -- # val= 00:10:07.676 07:10:41 -- accel/accel.sh@22 -- # case "$var" in 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # IFS=: 00:10:07.676 07:10:41 -- accel/accel.sh@20 -- # read -r var val 00:10:08.613 [2024-02-13 07:10:42.114167] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:09.182 07:10:42 -- accel/accel.sh@21 -- # val= 00:10:09.182 07:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # IFS=: 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # read -r var val 00:10:09.182 07:10:42 -- accel/accel.sh@21 -- # val= 00:10:09.182 07:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # IFS=: 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # read -r var val 00:10:09.182 07:10:42 -- accel/accel.sh@21 -- # val= 00:10:09.182 07:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # IFS=: 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # read -r var val 00:10:09.182 07:10:42 -- accel/accel.sh@21 -- # val= 00:10:09.182 07:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # IFS=: 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # read -r var val 00:10:09.182 07:10:42 -- accel/accel.sh@21 -- # val= 00:10:09.182 07:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # IFS=: 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # read -r var val 00:10:09.182 07:10:42 -- accel/accel.sh@21 -- # val= 00:10:09.182 07:10:42 -- accel/accel.sh@22 -- # case "$var" in 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # IFS=: 00:10:09.182 07:10:42 -- accel/accel.sh@20 -- # read -r var val 00:10:09.182 ************************************ 00:10:09.182 END TEST accel_xor 00:10:09.182 ************************************ 00:10:09.182 07:10:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:09.182 07:10:42 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:09.182 07:10:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:09.182 00:10:09.182 real 0m4.667s 00:10:09.182 user 0m4.170s 00:10:09.182 sys 0m0.361s 00:10:09.182 07:10:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:09.182 07:10:42 -- common/autotest_common.sh@10 -- # set 
+x 00:10:09.441 07:10:42 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:10:09.441 07:10:42 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:10:09.441 07:10:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:09.441 07:10:42 -- common/autotest_common.sh@10 -- # set +x 00:10:09.441 ************************************ 00:10:09.441 START TEST accel_xor 00:10:09.441 ************************************ 00:10:09.441 07:10:42 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w xor -y -x 3 00:10:09.441 07:10:42 -- accel/accel.sh@16 -- # local accel_opc 00:10:09.441 07:10:42 -- accel/accel.sh@17 -- # local accel_module 00:10:09.441 07:10:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:10:09.441 07:10:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:09.441 07:10:42 -- accel/accel.sh@12 -- # build_accel_config 00:10:09.441 07:10:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:09.441 07:10:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:09.441 07:10:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:09.441 07:10:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:09.441 07:10:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:09.441 07:10:42 -- accel/accel.sh@41 -- # local IFS=, 00:10:09.441 07:10:42 -- accel/accel.sh@42 -- # jq -r . 00:10:09.441 [2024-02-13 07:10:42.961773] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:10:09.441 [2024-02-13 07:10:42.962140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110889 ] 00:10:09.441 [2024-02-13 07:10:43.125856] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.700 [2024-02-13 07:10:43.304221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.700 [2024-02-13 07:10:43.304600] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:11.078 [2024-02-13 07:10:44.503082] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:11.644 07:10:45 -- accel/accel.sh@18 -- # out=' 00:10:11.644 SPDK Configuration: 00:10:11.644 Core mask: 0x1 00:10:11.644 00:10:11.644 Accel Perf Configuration: 00:10:11.644 Workload Type: xor 00:10:11.644 Source buffers: 3 00:10:11.644 Transfer size: 4096 bytes 00:10:11.644 Vector count 1 00:10:11.644 Module: software 00:10:11.644 Queue depth: 32 00:10:11.644 Allocate depth: 32 00:10:11.644 # threads/core: 1 00:10:11.644 Run time: 1 seconds 00:10:11.644 Verify: Yes 00:10:11.644 00:10:11.644 Running for 1 seconds... 
00:10:11.644 00:10:11.644 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:11.644 ------------------------------------------------------------------------------------ 00:10:11.644 0,0 237504/s 927 MiB/s 0 0 00:10:11.644 ==================================================================================== 00:10:11.644 Total 237504/s 927 MiB/s 0 0' 00:10:11.644 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:11.644 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:11.644 07:10:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:10:11.644 07:10:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:10:11.644 07:10:45 -- accel/accel.sh@12 -- # build_accel_config 00:10:11.644 07:10:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:11.644 07:10:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:11.644 07:10:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:11.644 07:10:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:11.644 07:10:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:11.644 07:10:45 -- accel/accel.sh@41 -- # local IFS=, 00:10:11.644 07:10:45 -- accel/accel.sh@42 -- # jq -r . 00:10:11.903 [2024-02-13 07:10:45.351278] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:10:11.903 [2024-02-13 07:10:45.351707] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110928 ] 00:10:11.903 [2024-02-13 07:10:45.519287] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.162 [2024-02-13 07:10:45.776174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.162 [2024-02-13 07:10:45.776598] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:12.420 07:10:45 -- accel/accel.sh@21 -- # val= 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val= 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val=0x1 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val= 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val= 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val=xor 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@24 -- # accel_opc=xor 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- 
accel/accel.sh@21 -- # val=3 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val= 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val=software 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@23 -- # accel_module=software 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val=32 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val=32 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val=1 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val=Yes 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val= 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:12.421 07:10:45 -- accel/accel.sh@21 -- # val= 00:10:12.421 07:10:45 -- accel/accel.sh@22 -- # case "$var" in 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # IFS=: 00:10:12.421 07:10:45 -- accel/accel.sh@20 -- # read -r var val 00:10:13.356 [2024-02-13 07:10:46.985210] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:14.292 07:10:47 -- accel/accel.sh@21 -- # val= 00:10:14.292 07:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # IFS=: 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # read -r var val 00:10:14.292 07:10:47 -- accel/accel.sh@21 -- # val= 00:10:14.292 07:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # IFS=: 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # read -r var val 00:10:14.292 07:10:47 -- accel/accel.sh@21 -- # val= 00:10:14.292 07:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # IFS=: 00:10:14.292 07:10:47 -- 
accel/accel.sh@20 -- # read -r var val 00:10:14.292 07:10:47 -- accel/accel.sh@21 -- # val= 00:10:14.292 07:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # IFS=: 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # read -r var val 00:10:14.292 07:10:47 -- accel/accel.sh@21 -- # val= 00:10:14.292 07:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # IFS=: 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # read -r var val 00:10:14.292 07:10:47 -- accel/accel.sh@21 -- # val= 00:10:14.292 07:10:47 -- accel/accel.sh@22 -- # case "$var" in 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # IFS=: 00:10:14.292 07:10:47 -- accel/accel.sh@20 -- # read -r var val 00:10:14.292 ************************************ 00:10:14.292 END TEST accel_xor 00:10:14.292 ************************************ 00:10:14.292 07:10:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:14.292 07:10:47 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:10:14.292 07:10:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:14.292 00:10:14.292 real 0m4.847s 00:10:14.292 user 0m4.344s 00:10:14.292 sys 0m0.354s 00:10:14.292 07:10:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:14.292 07:10:47 -- common/autotest_common.sh@10 -- # set +x 00:10:14.292 07:10:47 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:10:14.292 07:10:47 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:10:14.292 07:10:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:14.292 07:10:47 -- common/autotest_common.sh@10 -- # set +x 00:10:14.292 ************************************ 00:10:14.292 START TEST accel_dif_verify 00:10:14.292 ************************************ 00:10:14.292 07:10:47 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_verify 00:10:14.292 07:10:47 -- accel/accel.sh@16 -- # local accel_opc 00:10:14.292 07:10:47 -- accel/accel.sh@17 -- # local accel_module 00:10:14.292 07:10:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:10:14.292 07:10:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:14.292 07:10:47 -- accel/accel.sh@12 -- # build_accel_config 00:10:14.292 07:10:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:14.292 07:10:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:14.292 07:10:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:14.292 07:10:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:14.292 07:10:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:14.292 07:10:47 -- accel/accel.sh@41 -- # local IFS=, 00:10:14.292 07:10:47 -- accel/accel.sh@42 -- # jq -r . 00:10:14.292 [2024-02-13 07:10:47.869879] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
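Comparing the two xor Totals above, adding a third source buffer (-x 3) cost about 6% of bandwidth (989 vs 927 MiB/s), consistent with one extra input stream to read per output block:

echo "$(( (989 - 927) * 100 / 989 ))% lower with -x 3"   # integer math: 6% lower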
00:10:14.292 [2024-02-13 07:10:47.870373] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110980 ] 00:10:14.551 [2024-02-13 07:10:48.038355] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.551 [2024-02-13 07:10:48.225593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.551 [2024-02-13 07:10:48.226074] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:15.980 [2024-02-13 07:10:49.424547] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:16.546 07:10:50 -- accel/accel.sh@18 -- # out=' 00:10:16.546 SPDK Configuration: 00:10:16.546 Core mask: 0x1 00:10:16.546 00:10:16.546 Accel Perf Configuration: 00:10:16.546 Workload Type: dif_verify 00:10:16.546 Vector size: 4096 bytes 00:10:16.546 Transfer size: 4096 bytes 00:10:16.546 Block size: 512 bytes 00:10:16.546 Metadata size: 8 bytes 00:10:16.546 Vector count 1 00:10:16.546 Module: software 00:10:16.546 Queue depth: 32 00:10:16.546 Allocate depth: 32 00:10:16.546 # threads/core: 1 00:10:16.546 Run time: 1 seconds 00:10:16.546 Verify: No 00:10:16.546 00:10:16.546 Running for 1 seconds... 00:10:16.546 00:10:16.546 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:16.546 ------------------------------------------------------------------------------------ 00:10:16.546 0,0 106784/s 423 MiB/s 0 0 00:10:16.546 ==================================================================================== 00:10:16.546 Total 106784/s 417 MiB/s 0 0' 00:10:16.546 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:16.546 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:16.546 07:10:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:10:16.546 07:10:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:10:16.546 07:10:50 -- accel/accel.sh@12 -- # build_accel_config 00:10:16.546 07:10:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:16.546 07:10:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:16.804 07:10:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:16.804 07:10:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:16.804 07:10:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:16.804 07:10:50 -- accel/accel.sh@41 -- # local IFS=, 00:10:16.804 07:10:50 -- accel/accel.sh@42 -- # jq -r . 00:10:16.804 [2024-02-13 07:10:50.280391] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
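The dif_verify dump above describes the standard T10 DIF layout: each 512-byte block carries 8 bytes of protection metadata (a 2-byte guard CRC, a 2-byte application tag, and a 4-byte reference tag), so a 4096-byte transfer covers eight protected blocks. "Verify: No" here only reflects that this invocation omits -y (run_test accel_dif_verify accel_test -t 1 -w dif_verify); checking the DIF fields is the workload itself. The per-transfer accounting, in shell arithmetic:

transfer=4096 block=512 meta=8
echo "$(( transfer / block )) blocks, $(( transfer / block * meta )) bytes of DIF metadata per transfer"   # 8 blocks, 64 bytes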
00:10:16.804 [2024-02-13 07:10:50.280787] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111014 ] 00:10:16.804 [2024-02-13 07:10:50.448631] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.061 [2024-02-13 07:10:50.643353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.061 [2024-02-13 07:10:50.643788] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val= 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val= 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val=0x1 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val= 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val= 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val=dif_verify 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val= 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val=software 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # 
case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@23 -- # accel_module=software 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val=32 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val=32 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val=1 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val=No 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val= 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:17.320 07:10:50 -- accel/accel.sh@21 -- # val= 00:10:17.320 07:10:50 -- accel/accel.sh@22 -- # case "$var" in 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # IFS=: 00:10:17.320 07:10:50 -- accel/accel.sh@20 -- # read -r var val 00:10:18.255 [2024-02-13 07:10:51.853763] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:19.189 07:10:52 -- accel/accel.sh@21 -- # val= 00:10:19.189 07:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # IFS=: 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # read -r var val 00:10:19.189 07:10:52 -- accel/accel.sh@21 -- # val= 00:10:19.189 07:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # IFS=: 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # read -r var val 00:10:19.189 07:10:52 -- accel/accel.sh@21 -- # val= 00:10:19.189 07:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # IFS=: 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # read -r var val 00:10:19.189 07:10:52 -- accel/accel.sh@21 -- # val= 00:10:19.189 07:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # IFS=: 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # read -r var val 00:10:19.189 07:10:52 -- accel/accel.sh@21 -- # val= 00:10:19.189 07:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # IFS=: 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # read -r var val 00:10:19.189 07:10:52 -- accel/accel.sh@21 -- # val= 00:10:19.189 07:10:52 -- accel/accel.sh@22 -- # case "$var" in 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # IFS=: 00:10:19.189 07:10:52 -- accel/accel.sh@20 -- # read -r var val 00:10:19.189 07:10:52 -- accel/accel.sh@28 -- # [[ -n 
software ]] 00:10:19.189 07:10:52 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:10:19.189 07:10:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:19.189 00:10:19.189 real 0m4.815s 00:10:19.189 user 0m4.321s 00:10:19.189 sys 0m0.346s 00:10:19.189 07:10:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.189 ************************************ 00:10:19.189 END TEST accel_dif_verify 00:10:19.189 ************************************ 00:10:19.189 07:10:52 -- common/autotest_common.sh@10 -- # set +x 00:10:19.189 07:10:52 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:10:19.189 07:10:52 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:10:19.189 07:10:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:19.189 07:10:52 -- common/autotest_common.sh@10 -- # set +x 00:10:19.189 ************************************ 00:10:19.189 START TEST accel_dif_generate 00:10:19.189 ************************************ 00:10:19.189 07:10:52 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate 00:10:19.189 07:10:52 -- accel/accel.sh@16 -- # local accel_opc 00:10:19.189 07:10:52 -- accel/accel.sh@17 -- # local accel_module 00:10:19.189 07:10:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:10:19.189 07:10:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:19.189 07:10:52 -- accel/accel.sh@12 -- # build_accel_config 00:10:19.189 07:10:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:19.189 07:10:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:19.189 07:10:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:19.189 07:10:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:19.189 07:10:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:19.189 07:10:52 -- accel/accel.sh@41 -- # local IFS=, 00:10:19.189 07:10:52 -- accel/accel.sh@42 -- # jq -r . 00:10:19.189 [2024-02-13 07:10:52.743828] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
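Note: each test here executes under a run_test helper that prints the starred START/END banners and the real/user/sys timing seen above. A rough sketch of that pattern, for orientation only (the actual helper lives in common/autotest_common.sh and also manages xtrace, so this is illustrative, not the real implementation):
run_test() {                      # sketch only
    local name=$1; shift
    echo "START TEST $name"
    time "$@"                     # emits the real/user/sys lines
    local rc=$?
    echo "END TEST $name"
    return $rc
}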
00:10:19.189 [2024-02-13 07:10:52.745083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111090 ] 00:10:19.447 [2024-02-13 07:10:52.920869] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.448 [2024-02-13 07:10:53.108257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.448 [2024-02-13 07:10:53.108627] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:20.821 [2024-02-13 07:10:54.307617] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:21.755 07:10:55 -- accel/accel.sh@18 -- # out=' 00:10:21.755 SPDK Configuration: 00:10:21.755 Core mask: 0x1 00:10:21.755 00:10:21.755 Accel Perf Configuration: 00:10:21.755 Workload Type: dif_generate 00:10:21.755 Vector size: 4096 bytes 00:10:21.755 Transfer size: 4096 bytes 00:10:21.755 Block size: 512 bytes 00:10:21.755 Metadata size: 8 bytes 00:10:21.755 Vector count 1 00:10:21.755 Module: software 00:10:21.755 Queue depth: 32 00:10:21.755 Allocate depth: 32 00:10:21.755 # threads/core: 1 00:10:21.755 Run time: 1 seconds 00:10:21.755 Verify: No 00:10:21.755 00:10:21.755 Running for 1 seconds... 00:10:21.755 00:10:21.755 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:21.755 ------------------------------------------------------------------------------------ 00:10:21.755 0,0 128704/s 502 MiB/s 0 0 00:10:21.755 ==================================================================================== 00:10:21.755 Total 128704/s 502 MiB/s 0 0' 00:10:21.755 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:21.755 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:21.755 07:10:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:10:21.755 07:10:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:10:21.755 07:10:55 -- accel/accel.sh@12 -- # build_accel_config 00:10:21.755 07:10:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:21.755 07:10:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:21.755 07:10:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:21.755 07:10:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:21.756 07:10:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:21.756 07:10:55 -- accel/accel.sh@41 -- # local IFS=, 00:10:21.756 07:10:55 -- accel/accel.sh@42 -- # jq -r . 00:10:21.756 [2024-02-13 07:10:55.152021] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
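Note: the Failed and Miscompares columns are what ultimately gate these tests. Assuming the perf output has been captured in $out (as the out=' capture in the trace suggests), a clean run can be asserted with a one-liner; this is a hedged sketch, not harness code:
echo "$out" | awk '$1 == "Total" { exit ($5 != 0 || $6 != 0) }' \
    && echo "clean run" || echo "failures/miscompares detected"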
00:10:21.756 [2024-02-13 07:10:55.152478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111124 ] 00:10:21.756 [2024-02-13 07:10:55.318202] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.014 [2024-02-13 07:10:55.523916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.014 [2024-02-13 07:10:55.524272] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val= 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val= 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val=0x1 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val= 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val= 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val=dif_generate 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val='512 bytes' 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val='8 bytes' 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val= 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val=software 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- 
# case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@23 -- # accel_module=software 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val=32 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val=32 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val=1 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val=No 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val= 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:22.273 07:10:55 -- accel/accel.sh@21 -- # val= 00:10:22.273 07:10:55 -- accel/accel.sh@22 -- # case "$var" in 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # IFS=: 00:10:22.273 07:10:55 -- accel/accel.sh@20 -- # read -r var val 00:10:23.209 [2024-02-13 07:10:56.718758] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:24.157 07:10:57 -- accel/accel.sh@21 -- # val= 00:10:24.157 07:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # IFS=: 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # read -r var val 00:10:24.157 07:10:57 -- accel/accel.sh@21 -- # val= 00:10:24.157 07:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # IFS=: 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # read -r var val 00:10:24.157 07:10:57 -- accel/accel.sh@21 -- # val= 00:10:24.157 07:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # IFS=: 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # read -r var val 00:10:24.157 07:10:57 -- accel/accel.sh@21 -- # val= 00:10:24.157 07:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # IFS=: 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # read -r var val 00:10:24.157 07:10:57 -- accel/accel.sh@21 -- # val= 00:10:24.157 07:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # IFS=: 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # read -r var val 00:10:24.157 07:10:57 -- accel/accel.sh@21 -- # val= 00:10:24.157 07:10:57 -- accel/accel.sh@22 -- # case "$var" in 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # IFS=: 00:10:24.157 07:10:57 -- accel/accel.sh@20 -- # read -r var val 00:10:24.157 ************************************ 
00:10:24.157 END TEST accel_dif_generate 00:10:24.157 ************************************ 00:10:24.157 07:10:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:24.157 07:10:57 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:10:24.157 07:10:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:24.157 00:10:24.157 real 0m4.802s 00:10:24.157 user 0m4.313s 00:10:24.157 sys 0m0.346s 00:10:24.157 07:10:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:24.157 07:10:57 -- common/autotest_common.sh@10 -- # set +x 00:10:24.157 07:10:57 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:10:24.157 07:10:57 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:10:24.157 07:10:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:24.157 07:10:57 -- common/autotest_common.sh@10 -- # set +x 00:10:24.157 ************************************ 00:10:24.157 START TEST accel_dif_generate_copy 00:10:24.157 ************************************ 00:10:24.157 07:10:57 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w dif_generate_copy 00:10:24.157 07:10:57 -- accel/accel.sh@16 -- # local accel_opc 00:10:24.157 07:10:57 -- accel/accel.sh@17 -- # local accel_module 00:10:24.157 07:10:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:10:24.157 07:10:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:24.157 07:10:57 -- accel/accel.sh@12 -- # build_accel_config 00:10:24.157 07:10:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:24.157 07:10:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:24.157 07:10:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:24.157 07:10:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:24.157 07:10:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:24.157 07:10:57 -- accel/accel.sh@41 -- # local IFS=, 00:10:24.157 07:10:57 -- accel/accel.sh@42 -- # jq -r . 00:10:24.157 [2024-02-13 07:10:57.599035] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
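Note: dif_verify, dif_generate, and dif_generate_copy are all driven by the same binary with only the -w argument changing, so the whole DIF block can be covered in one loop (paths as logged; the empty config is again a hedged stand-in for build_accel_config):
cd /home/vagrant/spdk_repo/spdk
for w in dif_verify dif_generate dif_generate_copy; do
    build/examples/accel_perf -c <(echo '{}') -t 1 -w "$w"
done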
00:10:24.157 [2024-02-13 07:10:57.599379] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111178 ] 00:10:24.157 [2024-02-13 07:10:57.760863] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.415 [2024-02-13 07:10:57.962744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.415 [2024-02-13 07:10:57.963165] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:25.792 [2024-02-13 07:10:59.164839] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:26.360 07:10:59 -- accel/accel.sh@18 -- # out=' 00:10:26.360 SPDK Configuration: 00:10:26.360 Core mask: 0x1 00:10:26.360 00:10:26.360 Accel Perf Configuration: 00:10:26.360 Workload Type: dif_generate_copy 00:10:26.360 Vector size: 4096 bytes 00:10:26.360 Transfer size: 4096 bytes 00:10:26.360 Vector count 1 00:10:26.360 Module: software 00:10:26.360 Queue depth: 32 00:10:26.360 Allocate depth: 32 00:10:26.360 # threads/core: 1 00:10:26.360 Run time: 1 seconds 00:10:26.360 Verify: No 00:10:26.360 00:10:26.360 Running for 1 seconds... 00:10:26.360 00:10:26.360 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:26.360 ------------------------------------------------------------------------------------ 00:10:26.360 0,0 99360/s 388 MiB/s 0 0 00:10:26.360 ==================================================================================== 00:10:26.360 Total 99360/s 388 MiB/s 0 0' 00:10:26.360 07:10:59 -- accel/accel.sh@20 -- # IFS=: 00:10:26.360 07:10:59 -- accel/accel.sh@20 -- # read -r var val 00:10:26.360 07:10:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:10:26.360 07:10:59 -- accel/accel.sh@12 -- # build_accel_config 00:10:26.361 07:10:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:10:26.361 07:10:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:26.361 07:10:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:26.361 07:10:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:26.361 07:10:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:26.361 07:10:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:26.361 07:10:59 -- accel/accel.sh@41 -- # local IFS=, 00:10:26.361 07:10:59 -- accel/accel.sh@42 -- # jq -r . 00:10:26.361 [2024-02-13 07:11:00.024240] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
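Note: the -c /dev/fd/62 argument together with the jq -r . step in the trace suggests how build_accel_config delivers its JSON: rendered, normalized by jq, and attached to file descriptor 62 before the binary starts. A hedged reconstruction of that plumbing, inferred from the trace rather than copied from accel.sh:
exec 62< <(echo '{}' | jq -r .)   # '{}' = no accel modules configured
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
    -c /dev/fd/62 -t 1 -w dif_generate_copy
exec 62<&-                        # close the descriptor afterwards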
00:10:26.361 [2024-02-13 07:11:00.024702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111217 ] 00:10:26.620 [2024-02-13 07:11:00.193990] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.878 [2024-02-13 07:11:00.395471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.878 [2024-02-13 07:11:00.395921] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val= 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val= 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val=0x1 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val= 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val= 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val= 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val=software 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@23 -- # accel_module=software 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val=32 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- 
accel/accel.sh@21 -- # val=32 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val=1 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val=No 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val= 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:27.137 07:11:00 -- accel/accel.sh@21 -- # val= 00:10:27.137 07:11:00 -- accel/accel.sh@22 -- # case "$var" in 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # IFS=: 00:10:27.137 07:11:00 -- accel/accel.sh@20 -- # read -r var val 00:10:28.073 [2024-02-13 07:11:01.608477] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:29.011 07:11:02 -- accel/accel.sh@21 -- # val= 00:10:29.011 07:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.011 07:11:02 -- accel/accel.sh@21 -- # val= 00:10:29.011 07:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.011 07:11:02 -- accel/accel.sh@21 -- # val= 00:10:29.011 07:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.011 07:11:02 -- accel/accel.sh@21 -- # val= 00:10:29.011 07:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.011 07:11:02 -- accel/accel.sh@21 -- # val= 00:10:29.011 07:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.011 07:11:02 -- accel/accel.sh@21 -- # val= 00:10:29.011 07:11:02 -- accel/accel.sh@22 -- # case "$var" in 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # IFS=: 00:10:29.011 07:11:02 -- accel/accel.sh@20 -- # read -r var val 00:10:29.011 ************************************ 00:10:29.011 END TEST accel_dif_generate_copy 00:10:29.011 ************************************ 00:10:29.011 07:11:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:29.011 07:11:02 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:10:29.011 07:11:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:29.011 00:10:29.011 real 0m4.851s 00:10:29.011 user 0m4.323s 00:10:29.011 sys 0m0.377s 00:10:29.011 07:11:02 -- common/autotest_common.sh@1103 -- 
# xtrace_disable 00:10:29.011 07:11:02 -- common/autotest_common.sh@10 -- # set +x 00:10:29.011 07:11:02 -- accel/accel.sh@107 -- # [[ y == y ]] 00:10:29.011 07:11:02 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:29.011 07:11:02 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:10:29.011 07:11:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:29.011 07:11:02 -- common/autotest_common.sh@10 -- # set +x 00:10:29.011 ************************************ 00:10:29.011 START TEST accel_comp 00:10:29.011 ************************************ 00:10:29.011 07:11:02 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:29.011 07:11:02 -- accel/accel.sh@16 -- # local accel_opc 00:10:29.011 07:11:02 -- accel/accel.sh@17 -- # local accel_module 00:10:29.011 07:11:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:29.011 07:11:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:29.011 07:11:02 -- accel/accel.sh@12 -- # build_accel_config 00:10:29.011 07:11:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:29.011 07:11:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:29.011 07:11:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:29.011 07:11:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:29.011 07:11:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:29.011 07:11:02 -- accel/accel.sh@41 -- # local IFS=, 00:10:29.011 07:11:02 -- accel/accel.sh@42 -- # jq -r . 00:10:29.011 [2024-02-13 07:11:02.502069] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:10:29.011 [2024-02-13 07:11:02.502382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111281 ] 00:10:29.011 [2024-02-13 07:11:02.657219] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.269 [2024-02-13 07:11:02.844654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.269 [2024-02-13 07:11:02.845099] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:30.647 [2024-02-13 07:11:04.047323] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:31.214 07:11:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:31.214 00:10:31.214 SPDK Configuration: 00:10:31.214 Core mask: 0x1 00:10:31.214 00:10:31.214 Accel Perf Configuration: 00:10:31.214 Workload Type: compress 00:10:31.214 Transfer size: 4096 bytes 00:10:31.214 Vector count 1 00:10:31.214 Module: software 00:10:31.214 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:31.214 Queue depth: 32 00:10:31.214 Allocate depth: 32 00:10:31.214 # threads/core: 1 00:10:31.214 Run time: 1 seconds 00:10:31.214 Verify: No 00:10:31.214 00:10:31.214 Running for 1 seconds... 
00:10:31.214 00:10:31.214 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:31.214 ------------------------------------------------------------------------------------ 00:10:31.214 0,0 55744/s 217 MiB/s 0 0 00:10:31.214 ==================================================================================== 00:10:31.214 Total 55744/s 217 MiB/s 0 0' 00:10:31.214 07:11:04 -- accel/accel.sh@20 -- # IFS=: 00:10:31.214 07:11:04 -- accel/accel.sh@20 -- # read -r var val 00:10:31.214 07:11:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:31.214 07:11:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:31.214 07:11:04 -- accel/accel.sh@12 -- # build_accel_config 00:10:31.214 07:11:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:31.214 07:11:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:31.214 07:11:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:31.214 07:11:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:31.214 07:11:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:31.214 07:11:04 -- accel/accel.sh@41 -- # local IFS=, 00:10:31.214 07:11:04 -- accel/accel.sh@42 -- # jq -r . 00:10:31.214 [2024-02-13 07:11:04.860964] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:10:31.214 [2024-02-13 07:11:04.861355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111322 ] 00:10:31.473 [2024-02-13 07:11:05.029037] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.740 [2024-02-13 07:11:05.205850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.740 [2024-02-13 07:11:05.206247] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val= 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val= 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val=0x1 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val= 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val= 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740
07:11:05 -- accel/accel.sh@21 -- # val=compress 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@24 -- # accel_opc=compress 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val= 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val=software 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@23 -- # accel_module=software 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val=32 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val=32 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val=1 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val=No 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val= 00:10:31.740 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.740 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:31.740 07:11:05 -- accel/accel.sh@21 -- # val= 00:10:31.741 07:11:05 -- accel/accel.sh@22 -- # case "$var" in 00:10:31.741 07:11:05 -- accel/accel.sh@20 -- # IFS=: 00:10:31.741 07:11:05 -- accel/accel.sh@20 -- # read -r var val 00:10:33.143 [2024-02-13 07:11:06.412420] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:33.710 07:11:07 -- accel/accel.sh@21 -- # val= 00:10:33.710 07:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # IFS=: 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # read -r var val 00:10:33.710 07:11:07 -- accel/accel.sh@21 -- # val= 00:10:33.710 
07:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # IFS=: 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # read -r var val 00:10:33.710 07:11:07 -- accel/accel.sh@21 -- # val= 00:10:33.710 07:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # IFS=: 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # read -r var val 00:10:33.710 07:11:07 -- accel/accel.sh@21 -- # val= 00:10:33.710 07:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # IFS=: 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # read -r var val 00:10:33.710 07:11:07 -- accel/accel.sh@21 -- # val= 00:10:33.710 07:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # IFS=: 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # read -r var val 00:10:33.710 07:11:07 -- accel/accel.sh@21 -- # val= 00:10:33.710 07:11:07 -- accel/accel.sh@22 -- # case "$var" in 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # IFS=: 00:10:33.710 07:11:07 -- accel/accel.sh@20 -- # read -r var val 00:10:33.710 ************************************ 00:10:33.710 END TEST accel_comp 00:10:33.710 ************************************ 00:10:33.710 07:11:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:33.710 07:11:07 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:10:33.710 07:11:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:33.710 00:10:33.710 real 0m4.702s 00:10:33.710 user 0m4.145s 00:10:33.710 sys 0m0.386s 00:10:33.710 07:11:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:33.710 07:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:33.710 07:11:07 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:33.710 07:11:07 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:10:33.710 07:11:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:33.710 07:11:07 -- common/autotest_common.sh@10 -- # set +x 00:10:33.710 ************************************ 00:10:33.710 START TEST accel_decomp 00:10:33.710 ************************************ 00:10:33.710 07:11:07 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:33.710 07:11:07 -- accel/accel.sh@16 -- # local accel_opc 00:10:33.710 07:11:07 -- accel/accel.sh@17 -- # local accel_module 00:10:33.710 07:11:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:33.710 07:11:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:33.710 07:11:07 -- accel/accel.sh@12 -- # build_accel_config 00:10:33.710 07:11:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:33.710 07:11:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:33.710 07:11:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.710 07:11:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:33.710 07:11:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:33.710 07:11:07 -- accel/accel.sh@41 -- # local IFS=, 00:10:33.710 07:11:07 -- accel/accel.sh@42 -- # jq -r . 00:10:33.710 [2024-02-13 07:11:07.264119] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
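Note: accel_comp and accel_decomp are a matched pair, with the first compressing the bib test file and the second decompressing it with verification enabled via -y. Both invocations appear verbatim in the trace and can be run back to back (config stand-in as before):
cd /home/vagrant/spdk_repo/spdk
build/examples/accel_perf -c <(echo '{}') -t 1 -w compress   -l test/accel/bib
build/examples/accel_perf -c <(echo '{}') -t 1 -w decompress -l test/accel/bib -y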
00:10:33.710 [2024-02-13 07:11:07.264563] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111369 ] 00:10:33.969 [2024-02-13 07:11:07.429849] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.969 [2024-02-13 07:11:07.644552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.969 [2024-02-13 07:11:07.644908] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:35.344 [2024-02-13 07:11:08.840746] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:35.911 07:11:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:35.911 00:10:35.911 SPDK Configuration: 00:10:35.911 Core mask: 0x1 00:10:35.911 00:10:35.911 Accel Perf Configuration: 00:10:35.911 Workload Type: decompress 00:10:35.911 Transfer size: 4096 bytes 00:10:35.911 Vector count 1 00:10:35.911 Module: software 00:10:35.911 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:35.911 Queue depth: 32 00:10:35.911 Allocate depth: 32 00:10:35.911 # threads/core: 1 00:10:35.911 Run time: 1 seconds 00:10:35.911 Verify: Yes 00:10:35.911 00:10:35.911 Running for 1 seconds... 00:10:35.911 00:10:35.911 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:35.911 ------------------------------------------------------------------------------------ 00:10:35.911 0,0 70688/s 276 MiB/s 0 0 00:10:35.911 ==================================================================================== 00:10:35.911 Total 70688/s 276 MiB/s 0 0' 00:10:35.911 07:11:09 -- accel/accel.sh@20 -- # IFS=: 00:10:35.911 07:11:09 -- accel/accel.sh@20 -- # read -r var val 00:10:35.911 07:11:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:35.911 07:11:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:10:35.911 07:11:09 -- accel/accel.sh@12 -- # build_accel_config 00:10:35.911 07:11:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:35.911 07:11:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:35.911 07:11:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:35.911 07:11:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:35.911 07:11:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:35.911 07:11:09 -- accel/accel.sh@41 -- # local IFS=, 00:10:35.911 07:11:09 -- accel/accel.sh@42 -- # jq -r . 00:10:36.170 [2024-02-13 07:11:09.631808] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
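Note: every app start in this block logs the same spdk_subsystem_init_from_json_config deprecation and reports it "hit 1 times" at shutdown. With the console output saved to a file (build.log here is hypothetical, not a file the job produces), the occurrences are easy to tally:
grep -o "hit 1 times" build.log | wc -l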
00:10:36.170 [2024-02-13 07:11:09.632139] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111403 ] 00:10:36.170 [2024-02-13 07:11:09.800374] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:36.428 [2024-02-13 07:11:10.015287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:36.428 [2024-02-13 07:11:10.015685] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val= 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val= 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val= 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val=0x1 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val= 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val= 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val=decompress 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val= 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val=software 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@23 -- # accel_module=software 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 
-- accel/accel.sh@21 -- # val=32 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val=32 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val=1 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val=Yes 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val= 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:36.687 07:11:10 -- accel/accel.sh@21 -- # val= 00:10:36.687 07:11:10 -- accel/accel.sh@22 -- # case "$var" in 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # IFS=: 00:10:36.687 07:11:10 -- accel/accel.sh@20 -- # read -r var val 00:10:37.624 [2024-02-13 07:11:11.227686] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:38.560 07:11:11 -- accel/accel.sh@21 -- # val= 00:10:38.560 07:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.560 07:11:11 -- accel/accel.sh@21 -- # val= 00:10:38.560 07:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.560 07:11:11 -- accel/accel.sh@21 -- # val= 00:10:38.560 07:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.560 07:11:11 -- accel/accel.sh@21 -- # val= 00:10:38.560 07:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.560 07:11:11 -- accel/accel.sh@21 -- # val= 00:10:38.560 07:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.560 07:11:11 -- accel/accel.sh@21 -- # val= 00:10:38.560 07:11:11 -- accel/accel.sh@22 -- # case "$var" in 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # IFS=: 00:10:38.560 07:11:11 -- accel/accel.sh@20 -- # read -r var val 00:10:38.560 07:11:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:38.560 07:11:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:38.560 07:11:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:38.560 00:10:38.560 real 0m4.789s 00:10:38.560 user 
0m4.256s 00:10:38.560 sys 0m0.371s 00:10:38.560 07:11:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:38.560 ************************************ 00:10:38.560 END TEST accel_decomp 00:10:38.560 ************************************ 00:10:38.560 07:11:12 -- common/autotest_common.sh@10 -- # set +x 00:10:38.560 07:11:12 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:38.560 07:11:12 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:10:38.560 07:11:12 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:38.560 07:11:12 -- common/autotest_common.sh@10 -- # set +x 00:10:38.560 ************************************ 00:10:38.560 START TEST accel_decmop_full 00:10:38.560 ************************************ 00:10:38.560 07:11:12 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:38.560 07:11:12 -- accel/accel.sh@16 -- # local accel_opc 00:10:38.560 07:11:12 -- accel/accel.sh@17 -- # local accel_module 00:10:38.560 07:11:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:38.560 07:11:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:38.560 07:11:12 -- accel/accel.sh@12 -- # build_accel_config 00:10:38.560 07:11:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:38.560 07:11:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:38.560 07:11:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:38.560 07:11:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:38.560 07:11:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:38.560 07:11:12 -- accel/accel.sh@41 -- # local IFS=, 00:10:38.560 07:11:12 -- accel/accel.sh@42 -- # jq -r . 00:10:38.560 [2024-02-13 07:11:12.117791] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:10:38.560 [2024-02-13 07:11:12.119217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111462 ] 00:10:38.820 [2024-02-13 07:11:12.293959] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.820 [2024-02-13 07:11:12.493948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.820 [2024-02-13 07:11:12.494348] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:40.200 [2024-02-13 07:11:13.709506] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:41.136 07:11:14 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:10:41.136 00:10:41.136 SPDK Configuration: 00:10:41.136 Core mask: 0x1 00:10:41.136 00:10:41.136 Accel Perf Configuration: 00:10:41.136 Workload Type: decompress 00:10:41.136 Transfer size: 111250 bytes 00:10:41.136 Vector count 1 00:10:41.136 Module: software 00:10:41.136 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:41.136 Queue depth: 32 00:10:41.136 Allocate depth: 32 00:10:41.136 # threads/core: 1 00:10:41.136 Run time: 1 seconds 00:10:41.136 Verify: Yes 00:10:41.136 00:10:41.136 Running for 1 seconds... 00:10:41.136 00:10:41.136 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:41.136 ------------------------------------------------------------------------------------ 00:10:41.136 0,0 5120/s 211 MiB/s 0 0 00:10:41.136 ==================================================================================== 00:10:41.136 Total 5120/s 543 MiB/s 0 0' 00:10:41.136 07:11:14 -- accel/accel.sh@20 -- # IFS=: 00:10:41.136 07:11:14 -- accel/accel.sh@20 -- # read -r var val 00:10:41.136 07:11:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:41.136 07:11:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:10:41.136 07:11:14 -- accel/accel.sh@12 -- # build_accel_config 00:10:41.136 07:11:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:41.136 07:11:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:41.136 07:11:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:41.136 07:11:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:41.136 07:11:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:41.136 07:11:14 -- accel/accel.sh@41 -- # local IFS=, 00:10:41.136 07:11:14 -- accel/accel.sh@42 -- # jq -r . 00:10:41.136 [2024-02-13 07:11:14.514902] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
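The accel_decmop_full run differs from plain accel_decomp only in the -o 0 flag; judging from the configuration dump above, that switches the transfer size from the default 4096 bytes to the full 111250-byte decompressed buffer. A minimal standalone reproduction of the recorded command line might look like the sketch below. The workspace path is taken from the log, while feeding an empty JSON accel config on fd 62 is an assumption about what build_accel_config produced for this run.

    SPDK=/home/vagrant/spdk_repo/spdk
    # Mirror "-c /dev/fd/62" from the log by supplying the JSON config on fd 62.
    "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 62< <(printf '{}')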
00:10:41.136 [2024-02-13 07:11:14.515297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111515 ] 00:10:41.136 [2024-02-13 07:11:14.682094] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.396 [2024-02-13 07:11:14.870256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.396 [2024-02-13 07:11:14.870715] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val= 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val= 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val= 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val=0x1 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val= 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val= 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val=decompress 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val= 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val=software 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@23 -- # accel_module=software 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 
07:11:15 -- accel/accel.sh@21 -- # val=32 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val=32 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val=1 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val=Yes 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val= 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:41.396 07:11:15 -- accel/accel.sh@21 -- # val= 00:10:41.396 07:11:15 -- accel/accel.sh@22 -- # case "$var" in 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # IFS=: 00:10:41.396 07:11:15 -- accel/accel.sh@20 -- # read -r var val 00:10:42.771 [2024-02-13 07:11:16.089768] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:43.339 07:11:16 -- accel/accel.sh@21 -- # val= 00:10:43.339 07:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # IFS=: 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # read -r var val 00:10:43.339 07:11:16 -- accel/accel.sh@21 -- # val= 00:10:43.339 07:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # IFS=: 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # read -r var val 00:10:43.339 07:11:16 -- accel/accel.sh@21 -- # val= 00:10:43.339 07:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # IFS=: 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # read -r var val 00:10:43.339 07:11:16 -- accel/accel.sh@21 -- # val= 00:10:43.339 07:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # IFS=: 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # read -r var val 00:10:43.339 07:11:16 -- accel/accel.sh@21 -- # val= 00:10:43.339 07:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # IFS=: 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # read -r var val 00:10:43.339 07:11:16 -- accel/accel.sh@21 -- # val= 00:10:43.339 07:11:16 -- accel/accel.sh@22 -- # case "$var" in 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # IFS=: 00:10:43.339 07:11:16 -- accel/accel.sh@20 -- # read -r var val 00:10:43.339 ************************************ 00:10:43.339 END TEST accel_decmop_full 00:10:43.339 ************************************ 00:10:43.339 07:11:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:43.339 07:11:16 -- accel/accel.sh@28 -- # [[ -n decompress ]] 
00:10:43.339 07:11:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:43.339 00:10:43.339 real 0m4.812s 00:10:43.339 user 0m4.256s 00:10:43.339 sys 0m0.394s 00:10:43.339 07:11:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:43.339 07:11:16 -- common/autotest_common.sh@10 -- # set +x 00:10:43.339 07:11:16 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:43.339 07:11:16 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:10:43.339 07:11:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:43.339 07:11:16 -- common/autotest_common.sh@10 -- # set +x 00:10:43.339 ************************************ 00:10:43.339 START TEST accel_decomp_mcore 00:10:43.339 ************************************ 00:10:43.339 07:11:16 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:43.339 07:11:16 -- accel/accel.sh@16 -- # local accel_opc 00:10:43.339 07:11:16 -- accel/accel.sh@17 -- # local accel_module 00:10:43.339 07:11:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:43.339 07:11:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:43.339 07:11:16 -- accel/accel.sh@12 -- # build_accel_config 00:10:43.339 07:11:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:43.339 07:11:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:43.339 07:11:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.339 07:11:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:43.339 07:11:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:43.339 07:11:16 -- accel/accel.sh@41 -- # local IFS=, 00:10:43.339 07:11:16 -- accel/accel.sh@42 -- # jq -r . 00:10:43.339 [2024-02-13 07:11:16.982995] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:10:43.339 [2024-02-13 07:11:16.984043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111569 ] 00:10:43.598 [2024-02-13 07:11:17.178033] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.857 [2024-02-13 07:11:17.399958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.857 [2024-02-13 07:11:17.400136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.857 [2024-02-13 07:11:17.400160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.857 [2024-02-13 07:11:17.400159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.857 [2024-02-13 07:11:17.400422] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:45.234 [2024-02-13 07:11:18.615032] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:45.810 07:11:19 -- accel/accel.sh@18 -- # out='Preparing input file... 
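Each test above and below is driven through the suite's run_test helper, which prints the START/END banners and the real/user/sys timing just logged for accel_decmop_full. A simplified sketch of that pattern follows; the real helper lives in autotest_common.sh and additionally handles the xtrace toggling visible in the trace.

    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"                   # emits the real/user/sys lines seen above
        echo "************ END TEST $name ************"
    }
    run_test accel_decomp_mcore accel_test -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf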
00:10:45.810 00:10:45.810 SPDK Configuration: 00:10:45.810 Core mask: 0xf 00:10:45.810 00:10:45.810 Accel Perf Configuration: 00:10:45.810 Workload Type: decompress 00:10:45.810 Transfer size: 4096 bytes 00:10:45.810 Vector count 1 00:10:45.810 Module: software 00:10:45.810 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:45.810 Queue depth: 32 00:10:45.810 Allocate depth: 32 00:10:45.810 # threads/core: 1 00:10:45.810 Run time: 1 seconds 00:10:45.810 Verify: Yes 00:10:45.810 00:10:45.810 Running for 1 seconds... 00:10:45.810 00:10:45.810 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:45.810 ------------------------------------------------------------------------------------ 00:10:45.810 0,0 49120/s 90 MiB/s 0 0 00:10:45.810 3,0 44608/s 82 MiB/s 0 0 00:10:45.810 2,0 43392/s 79 MiB/s 0 0 00:10:45.810 1,0 49408/s 91 MiB/s 0 0 00:10:45.810 ==================================================================================== 00:10:45.810 Total 186528/s 728 MiB/s 0 0' 00:10:45.810 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:45.811 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:45.811 07:11:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:45.811 07:11:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:10:45.811 07:11:19 -- accel/accel.sh@12 -- # build_accel_config 00:10:45.811 07:11:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:45.811 07:11:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:45.811 07:11:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:45.811 07:11:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:45.811 07:11:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:45.811 07:11:19 -- accel/accel.sh@41 -- # local IFS=, 00:10:45.811 07:11:19 -- accel/accel.sh@42 -- # jq -r . 00:10:45.811 [2024-02-13 07:11:19.408250] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
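The -m 0xf mask enables cores 0 through 3, which is why four reactors start and the table above carries one row per core. A quick Bash sketch of that correspondence, under the assumption that each set bit in the mask yields one "Core,Thread" result row:

    mask=0xf
    rows=0
    for ((bit = 0; bit < 32; bit++)); do
        if (( (mask >> bit) & 1 )); then
            rows=$((rows + 1))      # one reactor, hence one result row, per set bit
        fi
    done
    echo "$rows result rows expected"   # -> 4 for 0xf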
00:10:45.811 [2024-02-13 07:11:19.408629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111606 ] 00:10:46.079 [2024-02-13 07:11:19.586061] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.338 [2024-02-13 07:11:19.781122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.338 [2024-02-13 07:11:19.781275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.338 [2024-02-13 07:11:19.781411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.338 [2024-02-13 07:11:19.781721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.338 [2024-02-13 07:11:19.782470] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val= 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val= 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val= 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val=0xf 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val= 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val= 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val=decompress 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val= 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val=software 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@23 -- # accel_module=software 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- 
# read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val=32 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val=32 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val=1 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val=Yes 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val= 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:46.338 07:11:19 -- accel/accel.sh@21 -- # val= 00:10:46.338 07:11:19 -- accel/accel.sh@22 -- # case "$var" in 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # IFS=: 00:10:46.338 07:11:19 -- accel/accel.sh@20 -- # read -r var val 00:10:47.716 [2024-02-13 07:11:20.995075] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:48.284 07:11:21 -- accel/accel.sh@21 -- # val= 00:10:48.284 07:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # read -r var val 00:10:48.284 07:11:21 -- accel/accel.sh@21 -- # val= 00:10:48.284 07:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # read -r var val 00:10:48.284 07:11:21 -- accel/accel.sh@21 -- # val= 00:10:48.284 07:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # read -r var val 00:10:48.284 07:11:21 -- accel/accel.sh@21 -- # val= 00:10:48.284 07:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # read -r var val 00:10:48.284 07:11:21 -- accel/accel.sh@21 -- # val= 00:10:48.284 07:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # read -r var val 00:10:48.284 07:11:21 -- accel/accel.sh@21 -- # val= 00:10:48.284 07:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.284 07:11:21 -- 
accel/accel.sh@20 -- # read -r var val 00:10:48.284 07:11:21 -- accel/accel.sh@21 -- # val= 00:10:48.284 07:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # read -r var val 00:10:48.284 07:11:21 -- accel/accel.sh@21 -- # val= 00:10:48.284 07:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # read -r var val 00:10:48.284 07:11:21 -- accel/accel.sh@21 -- # val= 00:10:48.284 07:11:21 -- accel/accel.sh@22 -- # case "$var" in 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # IFS=: 00:10:48.284 07:11:21 -- accel/accel.sh@20 -- # read -r var val 00:10:48.284 ************************************ 00:10:48.284 END TEST accel_decomp_mcore 00:10:48.284 ************************************ 00:10:48.284 07:11:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:48.284 07:11:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:48.284 07:11:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:48.284 00:10:48.284 real 0m4.826s 00:10:48.284 user 0m14.105s 00:10:48.284 sys 0m0.410s 00:10:48.284 07:11:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:48.284 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.284 07:11:21 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:48.284 07:11:21 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:10:48.284 07:11:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:48.284 07:11:21 -- common/autotest_common.sh@10 -- # set +x 00:10:48.284 ************************************ 00:10:48.284 START TEST accel_decomp_full_mcore 00:10:48.284 ************************************ 00:10:48.284 07:11:21 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:48.284 07:11:21 -- accel/accel.sh@16 -- # local accel_opc 00:10:48.284 07:11:21 -- accel/accel.sh@17 -- # local accel_module 00:10:48.284 07:11:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:48.284 07:11:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:48.284 07:11:21 -- accel/accel.sh@12 -- # build_accel_config 00:10:48.284 07:11:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:48.284 07:11:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:48.284 07:11:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:48.284 07:11:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:48.284 07:11:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:48.284 07:11:21 -- accel/accel.sh@41 -- # local IFS=, 00:10:48.284 07:11:21 -- accel/accel.sh@42 -- # jq -r . 00:10:48.284 [2024-02-13 07:11:21.853205] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
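Note the timing signature of the multicore run that just ended: user CPU time (14.105s) is roughly three times the wall time (4.826s), since the four reactors selected by 0xf poll in parallel. A one-liner to eyeball that ratio, assuming bc is available on the host:

    echo "scale=2; 14.105 / 4.826" | bc   # ~2.92 cores' worth of CPU per wall second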
00:10:48.284 [2024-02-13 07:11:21.853554] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111662 ] 00:10:48.543 [2024-02-13 07:11:22.027844] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.543 [2024-02-13 07:11:22.212548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:48.543 [2024-02-13 07:11:22.212699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:48.543 [2024-02-13 07:11:22.212853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.543 [2024-02-13 07:11:22.212994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.543 [2024-02-13 07:11:22.214742] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:49.918 [2024-02-13 07:11:23.460396] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:50.854 07:11:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:50.854 00:10:50.854 SPDK Configuration: 00:10:50.854 Core mask: 0xf 00:10:50.854 00:10:50.854 Accel Perf Configuration: 00:10:50.854 Workload Type: decompress 00:10:50.854 Transfer size: 111250 bytes 00:10:50.854 Vector count 1 00:10:50.854 Module: software 00:10:50.854 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:50.854 Queue depth: 32 00:10:50.854 Allocate depth: 32 00:10:50.854 # threads/core: 1 00:10:50.854 Run time: 1 seconds 00:10:50.854 Verify: Yes 00:10:50.854 00:10:50.854 Running for 1 seconds... 00:10:50.854 00:10:50.854 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:50.854 ------------------------------------------------------------------------------------ 00:10:50.854 0,0 4928/s 203 MiB/s 0 0 00:10:50.854 3,0 4480/s 185 MiB/s 0 0 00:10:50.854 2,0 4864/s 200 MiB/s 0 0 00:10:50.854 1,0 4928/s 203 MiB/s 0 0 00:10:50.854 ==================================================================================== 00:10:50.854 Total 19200/s 2037 MiB/s 0 0' 00:10:50.854 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:50.854 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:50.854 07:11:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:50.854 07:11:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:50.854 07:11:24 -- accel/accel.sh@12 -- # build_accel_config 00:10:50.854 07:11:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:50.854 07:11:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:50.854 07:11:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:50.854 07:11:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:50.854 07:11:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:50.854 07:11:24 -- accel/accel.sh@41 -- # local IFS=, 00:10:50.854 07:11:24 -- accel/accel.sh@42 -- # jq -r . 00:10:50.854 [2024-02-13 07:11:24.268518] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
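In these tables the Total row is the sum of the per-core rows (4928 + 4480 + 4864 + 4928 = 19200 above). A sketch for sanity-checking that invariant against a saved copy of the table, assuming it has been extracted to a file without the timestamp prefixes this log adds:

    awk '/^[0-9]+,[0-9]+/ { v = $2; sub(/\/s$/, "", v); sum += v }
         /^Total/         { v = $2; sub(/\/s$/, "", v)
                            print (sum == v + 0 ? "Total row OK" : "mismatch: " sum " vs " v) }' table.txt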
00:10:50.854 [2024-02-13 07:11:24.268893] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111723 ] 00:10:50.854 [2024-02-13 07:11:24.441877] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.113 [2024-02-13 07:11:24.619987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.113 [2024-02-13 07:11:24.620131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.113 [2024-02-13 07:11:24.620271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.113 [2024-02-13 07:11:24.620565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.113 [2024-02-13 07:11:24.621660] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val= 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val= 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val= 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val=0xf 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val= 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val= 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val=decompress 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val= 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val=software 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@23 -- # accel_module=software 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 
-- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val=32 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val=32 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val=1 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val=Yes 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val= 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:51.372 07:11:24 -- accel/accel.sh@21 -- # val= 00:10:51.372 07:11:24 -- accel/accel.sh@22 -- # case "$var" in 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # IFS=: 00:10:51.372 07:11:24 -- accel/accel.sh@20 -- # read -r var val 00:10:52.309 [2024-02-13 07:11:25.860256] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:53.245 07:11:26 -- accel/accel.sh@21 -- # val= 00:10:53.245 07:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.245 07:11:26 -- accel/accel.sh@21 -- # val= 00:10:53.245 07:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.245 07:11:26 -- accel/accel.sh@21 -- # val= 00:10:53.245 07:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.245 07:11:26 -- accel/accel.sh@21 -- # val= 00:10:53.245 07:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.245 07:11:26 -- accel/accel.sh@21 -- # val= 00:10:53.245 07:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.245 07:11:26 -- accel/accel.sh@21 -- # val= 00:10:53.245 07:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.245 07:11:26 -- 
accel/accel.sh@20 -- # read -r var val 00:10:53.245 07:11:26 -- accel/accel.sh@21 -- # val= 00:10:53.245 07:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.245 07:11:26 -- accel/accel.sh@21 -- # val= 00:10:53.245 07:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.245 07:11:26 -- accel/accel.sh@21 -- # val= 00:10:53.245 07:11:26 -- accel/accel.sh@22 -- # case "$var" in 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # IFS=: 00:10:53.245 07:11:26 -- accel/accel.sh@20 -- # read -r var val 00:10:53.245 ************************************ 00:10:53.245 END TEST accel_decomp_full_mcore 00:10:53.245 ************************************ 00:10:53.245 07:11:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:10:53.245 07:11:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:53.245 07:11:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:53.245 00:10:53.245 real 0m4.830s 00:10:53.245 user 0m14.253s 00:10:53.245 sys 0m0.456s 00:10:53.245 07:11:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:53.245 07:11:26 -- common/autotest_common.sh@10 -- # set +x 00:10:53.245 07:11:26 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:53.245 07:11:26 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:10:53.245 07:11:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:53.245 07:11:26 -- common/autotest_common.sh@10 -- # set +x 00:10:53.246 ************************************ 00:10:53.246 START TEST accel_decomp_mthread 00:10:53.246 ************************************ 00:10:53.246 07:11:26 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:53.246 07:11:26 -- accel/accel.sh@16 -- # local accel_opc 00:10:53.246 07:11:26 -- accel/accel.sh@17 -- # local accel_module 00:10:53.246 07:11:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:53.246 07:11:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:53.246 07:11:26 -- accel/accel.sh@12 -- # build_accel_config 00:10:53.246 07:11:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:53.246 07:11:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:53.246 07:11:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.246 07:11:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:53.246 07:11:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:53.246 07:11:26 -- accel/accel.sh@41 -- # local IFS=, 00:10:53.246 07:11:26 -- accel/accel.sh@42 -- # jq -r . 00:10:53.246 [2024-02-13 07:11:26.742077] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:10:53.246 [2024-02-13 07:11:26.742437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111773 ] 00:10:53.246 [2024-02-13 07:11:26.908481] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.504 [2024-02-13 07:11:27.092081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.504 [2024-02-13 07:11:27.092966] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:54.917 [2024-02-13 07:11:28.286458] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:55.485 07:11:29 -- accel/accel.sh@18 -- # out='Preparing input file... 00:10:55.485 00:10:55.485 SPDK Configuration: 00:10:55.485 Core mask: 0x1 00:10:55.485 00:10:55.485 Accel Perf Configuration: 00:10:55.485 Workload Type: decompress 00:10:55.485 Transfer size: 4096 bytes 00:10:55.485 Vector count 1 00:10:55.485 Module: software 00:10:55.485 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:55.485 Queue depth: 32 00:10:55.485 Allocate depth: 32 00:10:55.485 # threads/core: 2 00:10:55.485 Run time: 1 seconds 00:10:55.485 Verify: Yes 00:10:55.485 00:10:55.485 Running for 1 seconds... 00:10:55.485 00:10:55.485 Core,Thread Transfers Bandwidth Failed Miscompares 00:10:55.485 ------------------------------------------------------------------------------------ 00:10:55.485 0,1 37728/s 69 MiB/s 0 0 00:10:55.485 0,0 37568/s 69 MiB/s 0 0 00:10:55.485 ==================================================================================== 00:10:55.485 Total 75296/s 294 MiB/s 0 0' 00:10:55.485 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:55.485 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:55.485 07:11:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:55.485 07:11:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:10:55.485 07:11:29 -- accel/accel.sh@12 -- # build_accel_config 00:10:55.485 07:11:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:55.485 07:11:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:55.485 07:11:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:55.485 07:11:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:55.485 07:11:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:55.485 07:11:29 -- accel/accel.sh@41 -- # local IFS=, 00:10:55.485 07:11:29 -- accel/accel.sh@42 -- # jq -r . 00:10:55.485 [2024-02-13 07:11:29.075336] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
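With -T 2 the single enabled core runs two worker threads, which is where the 0,0 and 0,1 rows above come from; comparing with the mcore tables (0,0 / 3,0 / 2,0 / 1,0), the row label reads as core,thread. A sketch that enumerates the expected labels for a given mask and thread count:

    mask=0x1; threads=2
    for ((core = 0; core < 32; core++)); do
        (( (mask >> core) & 1 )) || continue
        for ((t = 0; t < threads; t++)); do
            echo "$core,$t"         # -> "0,0" and "0,1" for this run
        done
    done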
00:10:55.485 [2024-02-13 07:11:29.075540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111814 ] 00:10:55.744 [2024-02-13 07:11:29.234121] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.744 [2024-02-13 07:11:29.403533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.744 [2024-02-13 07:11:29.404406] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val= 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val= 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val= 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val=0x1 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val= 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val= 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val=decompress 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val= 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val=software 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@23 -- # accel_module=software 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 
-- accel/accel.sh@21 -- # val=32 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val=32 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val=2 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val=Yes 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val= 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.004 07:11:29 -- accel/accel.sh@21 -- # val= 00:10:56.004 07:11:29 -- accel/accel.sh@22 -- # case "$var" in 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # IFS=: 00:10:56.004 07:11:29 -- accel/accel.sh@20 -- # read -r var val 00:10:56.942 [2024-02-13 07:11:30.602559] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:10:57.879 07:11:31 -- accel/accel.sh@21 -- # val= 00:10:57.879 07:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # IFS=: 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # read -r var val 00:10:57.879 07:11:31 -- accel/accel.sh@21 -- # val= 00:10:57.879 07:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # IFS=: 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # read -r var val 00:10:57.879 07:11:31 -- accel/accel.sh@21 -- # val= 00:10:57.879 07:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # IFS=: 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # read -r var val 00:10:57.879 07:11:31 -- accel/accel.sh@21 -- # val= 00:10:57.879 07:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # IFS=: 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # read -r var val 00:10:57.879 07:11:31 -- accel/accel.sh@21 -- # val= 00:10:57.879 07:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # IFS=: 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # read -r var val 00:10:57.879 07:11:31 -- accel/accel.sh@21 -- # val= 00:10:57.879 07:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # IFS=: 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # read -r var val 00:10:57.879 07:11:31 -- accel/accel.sh@21 -- # val= 00:10:57.879 07:11:31 -- accel/accel.sh@22 -- # case "$var" in 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # IFS=: 00:10:57.879 07:11:31 -- accel/accel.sh@20 -- # read -r var val 00:10:57.879 07:11:31 -- accel/accel.sh@28 -- # 
[[ -n software ]] 00:10:57.879 07:11:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:10:57.879 07:11:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:10:57.879 00:10:57.879 real 0m4.633s 00:10:57.879 user 0m4.106s 00:10:57.879 sys 0m0.351s 00:10:57.879 07:11:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:57.879 07:11:31 -- common/autotest_common.sh@10 -- # set +x 00:10:57.879 ************************************ 00:10:57.879 END TEST accel_decomp_mthread 00:10:57.879 ************************************ 00:10:57.879 07:11:31 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:57.879 07:11:31 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:10:57.879 07:11:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:10:57.879 07:11:31 -- common/autotest_common.sh@10 -- # set +x 00:10:57.879 ************************************ 00:10:57.879 START TEST accel_deomp_full_mthread 00:10:57.879 ************************************ 00:10:57.879 07:11:31 -- common/autotest_common.sh@1102 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:57.879 07:11:31 -- accel/accel.sh@16 -- # local accel_opc 00:10:57.879 07:11:31 -- accel/accel.sh@17 -- # local accel_module 00:10:57.879 07:11:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:57.879 07:11:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:10:57.879 07:11:31 -- accel/accel.sh@12 -- # build_accel_config 00:10:57.879 07:11:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:10:57.879 07:11:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:10:57.879 07:11:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:57.879 07:11:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:10:57.879 07:11:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:10:57.879 07:11:31 -- accel/accel.sh@41 -- # local IFS=, 00:10:57.879 07:11:31 -- accel/accel.sh@42 -- # jq -r . 00:10:57.879 [2024-02-13 07:11:31.431819] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:10:57.879 [2024-02-13 07:11:31.432038] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111861 ] 00:10:58.138 [2024-02-13 07:11:31.599264] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.138 [2024-02-13 07:11:31.785271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.138 [2024-02-13 07:11:31.785667] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:10:59.532 [2024-02-13 07:11:33.008287] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:11:00.097 07:11:33 -- accel/accel.sh@18 -- # out='Preparing input file... 
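The xtrace_disable / set +x pairs that bracket the END TEST banners keep the summary output free of "+"-prefixed trace lines. A simplified sketch of that idiom; the real helpers in autotest_common.sh also track nesting depth:

    xtrace_disable() { set +x; }
    xtrace_restore() { set -x; }

    xtrace_disable
    echo "quiet section: no trace lines emitted here"
    xtrace_restore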
00:11:00.097 00:11:00.097 SPDK Configuration: 00:11:00.097 Core mask: 0x1 00:11:00.097 00:11:00.097 Accel Perf Configuration: 00:11:00.097 Workload Type: decompress 00:11:00.097 Transfer size: 111250 bytes 00:11:00.097 Vector count 1 00:11:00.097 Module: software 00:11:00.097 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:00.097 Queue depth: 32 00:11:00.097 Allocate depth: 32 00:11:00.097 # threads/core: 2 00:11:00.097 Run time: 1 seconds 00:11:00.097 Verify: Yes 00:11:00.097 00:11:00.097 Running for 1 seconds... 00:11:00.097 00:11:00.097 Core,Thread Transfers Bandwidth Failed Miscompares 00:11:00.097 ------------------------------------------------------------------------------------ 00:11:00.097 0,1 2816/s 116 MiB/s 0 0 00:11:00.097 0,0 2752/s 113 MiB/s 0 0 00:11:00.097 ==================================================================================== 00:11:00.097 Total 5568/s 590 MiB/s 0 0' 00:11:00.097 07:11:33 -- accel/accel.sh@20 -- # IFS=: 00:11:00.097 07:11:33 -- accel/accel.sh@20 -- # read -r var val 00:11:00.097 07:11:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:00.097 07:11:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:11:00.097 07:11:33 -- accel/accel.sh@12 -- # build_accel_config 00:11:00.097 07:11:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:00.097 07:11:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:00.097 07:11:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:00.097 07:11:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:00.097 07:11:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:00.097 07:11:33 -- accel/accel.sh@41 -- # local IFS=, 00:11:00.097 07:11:33 -- accel/accel.sh@42 -- # jq -r . 00:11:00.097 [2024-02-13 07:11:33.755743] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
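As a cross-check on the table above, the Total row's bandwidth is consistent with transfers per second times the transfer size; integer Bash arithmetic reproduces it:

    transfers=5568
    size=111250                     # bytes, from "Transfer size" above
    echo "$(( transfers * size / 1024 / 1024 )) MiB/s"   # -> 590 MiB/s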
00:11:00.097 [2024-02-13 07:11:33.756063] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111912 ] 00:11:00.355 [2024-02-13 07:11:33.909632] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.613 [2024-02-13 07:11:34.093814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.613 [2024-02-13 07:11:34.094148] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:11:00.613 07:11:34 -- accel/accel.sh@21 -- # val= 00:11:00.613 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.613 07:11:34 -- accel/accel.sh@21 -- # val= 00:11:00.613 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.613 07:11:34 -- accel/accel.sh@21 -- # val= 00:11:00.613 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.613 07:11:34 -- accel/accel.sh@21 -- # val=0x1 00:11:00.613 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.613 07:11:34 -- accel/accel.sh@21 -- # val= 00:11:00.613 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.613 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val= 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val=decompress 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@24 -- # accel_opc=decompress 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val='111250 bytes' 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val= 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val=software 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@23 -- # accel_module=software 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 
07:11:34 -- accel/accel.sh@21 -- # val=32 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val=32 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val=2 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val=Yes 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val= 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:00.614 07:11:34 -- accel/accel.sh@21 -- # val= 00:11:00.614 07:11:34 -- accel/accel.sh@22 -- # case "$var" in 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # IFS=: 00:11:00.614 07:11:34 -- accel/accel.sh@20 -- # read -r var val 00:11:01.990 [2024-02-13 07:11:35.324729] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:11:02.558 07:11:36 -- accel/accel.sh@21 -- # val= 00:11:02.558 07:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # IFS=: 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # read -r var val 00:11:02.558 07:11:36 -- accel/accel.sh@21 -- # val= 00:11:02.558 07:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # IFS=: 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # read -r var val 00:11:02.558 07:11:36 -- accel/accel.sh@21 -- # val= 00:11:02.558 07:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # IFS=: 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # read -r var val 00:11:02.558 07:11:36 -- accel/accel.sh@21 -- # val= 00:11:02.558 07:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # IFS=: 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # read -r var val 00:11:02.558 07:11:36 -- accel/accel.sh@21 -- # val= 00:11:02.558 07:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # IFS=: 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # read -r var val 00:11:02.558 07:11:36 -- accel/accel.sh@21 -- # val= 00:11:02.558 07:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # IFS=: 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # read -r var val 00:11:02.558 07:11:36 -- accel/accel.sh@21 -- # val= 00:11:02.558 07:11:36 -- accel/accel.sh@22 -- # case "$var" in 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # IFS=: 00:11:02.558 07:11:36 -- accel/accel.sh@20 -- # read -r var val 00:11:02.558 07:11:36 -- 
accel/accel.sh@28 -- # [[ -n software ]] 00:11:02.558 07:11:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:11:02.558 07:11:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:11:02.558 00:11:02.558 real 0m4.653s 00:11:02.558 user 0m4.184s 00:11:02.558 sys 0m0.328s 00:11:02.558 07:11:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:02.558 ************************************ 00:11:02.558 END TEST accel_decomp_full_mthread 00:11:02.558 07:11:36 -- common/autotest_common.sh@10 -- # set +x 00:11:02.558 ************************************ 00:11:02.558 07:11:36 -- accel/accel.sh@116 -- # [[ n == y ]] 00:11:02.558 07:11:36 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:02.558 07:11:36 -- accel/accel.sh@129 -- # build_accel_config 00:11:02.558 07:11:36 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:11:02.558 07:11:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:02.558 07:11:36 -- common/autotest_common.sh@10 -- # set +x 00:11:02.558 07:11:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:11:02.558 07:11:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:11:02.558 07:11:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.558 07:11:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:11:02.558 07:11:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:11:02.558 07:11:36 -- accel/accel.sh@41 -- # local IFS=, 00:11:02.558 07:11:36 -- accel/accel.sh@42 -- # jq -r . 00:11:02.558 ************************************ 00:11:02.558 START TEST accel_dif_functional_tests 00:11:02.558 ************************************ 00:11:02.558 07:11:36 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:02.558 [2024-02-13 07:11:36.177438] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
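The accel_dif suite starting here drives the DIF (Data Integrity Field) generate/verify paths. The *ERROR* lines from dif.c in the output below are expected: the negative tests corrupt the guard, app tag, or ref tag on purpose and assert that verification fails. The binary can also be rerun on its own; a sketch, with an empty JSON object standing in for the accel config the harness pipes over fd 62:

  cd /home/vagrant/spdk_repo/spdk
  ./test/accel/dif/dif -c /dev/fd/62 62<<< '{}'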
00:11:02.558 [2024-02-13 07:11:36.178317] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111965 ] 00:11:02.818 [2024-02-13 07:11:36.353900] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:03.077 [2024-02-13 07:11:36.513848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.077 [2024-02-13 07:11:36.513974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.077 [2024-02-13 07:11:36.514223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.077 [2024-02-13 07:11:36.515361] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:11:03.336 00:11:03.336 00:11:03.336 CUnit - A unit testing framework for C - Version 2.1-3 00:11:03.336 http://cunit.sourceforge.net/ 00:11:03.336 00:11:03.336 00:11:03.336 Suite: accel_dif 00:11:03.336 Test: verify: DIF generated, GUARD check ...passed 00:11:03.336 Test: verify: DIF generated, APPTAG check ...passed 00:11:03.336 Test: verify: DIF generated, REFTAG check ...passed 00:11:03.336 Test: verify: DIF not generated, GUARD check ...[2024-02-13 07:11:36.790134] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:03.336 [2024-02-13 07:11:36.790860] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:03.336 passed 00:11:03.336 Test: verify: DIF not generated, APPTAG check ...[2024-02-13 07:11:36.791401] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:03.336 [2024-02-13 07:11:36.791772] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:03.336 passed 00:11:03.336 Test: verify: DIF not generated, REFTAG check ...[2024-02-13 07:11:36.792335] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:03.336 [2024-02-13 07:11:36.792709] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:03.336 passed 00:11:03.336 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:03.336 Test: verify: APPTAG incorrect, APPTAG check ...[2024-02-13 07:11:36.793424] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:03.336 passed 00:11:03.336 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:03.336 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:03.336 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:03.336 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-02-13 07:11:36.794477] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:03.336 passed 00:11:03.336 Test: generate copy: DIF generated, GUARD check ...passed 00:11:03.336 Test: generate copy: DIF generated, APPTAG check ...passed 00:11:03.336 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:03.336 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:11:03.336 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:11:03.336 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:11:03.336 Test: generate copy: iovecs-len
validate ...[2024-02-13 07:11:36.796385] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:11:03.336 passed 00:11:03.336 Test: generate copy: buffer alignment validate ...passed 00:11:03.336 00:11:03.336 Run Summary: Type Total Ran Passed Failed Inactive 00:11:03.336 suites 1 1 n/a 0 0 00:11:03.336 tests 20 20 20 0 0 00:11:03.336 asserts 204 204 204 0 n/a 00:11:03.336 00:11:03.336 Elapsed time = 0.012 seconds 00:11:03.336 [2024-02-13 07:11:36.797744] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:11:04.273 ************************************ 00:11:04.273 END TEST accel_dif_functional_tests 00:11:04.273 ************************************ 00:11:04.273 00:11:04.273 real 0m1.667s 00:11:04.273 user 0m3.158s 00:11:04.273 sys 0m0.259s 00:11:04.273 07:11:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.273 07:11:37 -- common/autotest_common.sh@10 -- # set +x 00:11:04.273 ************************************ 00:11:04.273 END TEST accel 00:11:04.273 ************************************ 00:11:04.273 00:11:04.273 real 1m48.476s 00:11:04.273 user 1m58.014s 00:11:04.273 sys 0m9.777s 00:11:04.273 07:11:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:04.273 07:11:37 -- common/autotest_common.sh@10 -- # set +x 00:11:04.273 07:11:37 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:04.273 07:11:37 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:04.273 07:11:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:04.273 07:11:37 -- common/autotest_common.sh@10 -- # set +x 00:11:04.273 ************************************ 00:11:04.273 START TEST accel_rpc 00:11:04.273 ************************************ 00:11:04.273 07:11:37 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:11:04.273 * Looking for test storage... 00:11:04.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:11:04.273 07:11:37 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:04.273 07:11:37 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=112049 00:11:04.273 07:11:37 -- accel/accel_rpc.sh@15 -- # waitforlisten 112049 00:11:04.273 07:11:37 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:04.273 07:11:37 -- common/autotest_common.sh@817 -- # '[' -z 112049 ']' 00:11:04.273 07:11:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.273 07:11:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:04.274 07:11:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.274 07:11:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:04.274 07:11:37 -- common/autotest_common.sh@10 -- # set +x 00:11:04.533 [2024-02-13 07:11:37.999723] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
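The accel_rpc suite below exercises the same accel framework over JSON-RPC instead of the perf binary. Reproduced by hand, the sequence is roughly as follows (RPC names exactly as they appear in the transcript; the target must be started with --wait-for-rpc so opcodes can be reassigned before framework init):

  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect "software"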
00:11:04.533 [2024-02-13 07:11:38.000857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112049 ] 00:11:04.533 [2024-02-13 07:11:38.155067] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.792 [2024-02-13 07:11:38.330064] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:04.792 [2024-02-13 07:11:38.330935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.360 07:11:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:05.360 07:11:38 -- common/autotest_common.sh@850 -- # return 0 00:11:05.360 07:11:38 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:05.360 07:11:38 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:11:05.360 07:11:38 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:05.360 07:11:38 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:11:05.360 07:11:38 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:05.360 07:11:38 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:05.360 07:11:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:05.360 07:11:38 -- common/autotest_common.sh@10 -- # set +x 00:11:05.360 ************************************ 00:11:05.360 START TEST accel_assign_opcode 00:11:05.360 ************************************ 00:11:05.360 07:11:38 -- common/autotest_common.sh@1102 -- # accel_assign_opcode_test_suite 00:11:05.360 07:11:38 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:05.360 07:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.360 07:11:38 -- common/autotest_common.sh@10 -- # set +x 00:11:05.360 [2024-02-13 07:11:38.919999] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:05.360 07:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.360 07:11:38 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:05.360 07:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.360 07:11:38 -- common/autotest_common.sh@10 -- # set +x 00:11:05.360 [2024-02-13 07:11:38.927985] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:05.360 07:11:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.360 07:11:38 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:05.360 07:11:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.360 07:11:38 -- common/autotest_common.sh@10 -- # set +x 00:11:05.927 07:11:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:05.927 07:11:39 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:05.927 07:11:39 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:05.927 07:11:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:05.927 07:11:39 -- common/autotest_common.sh@10 -- # set +x 00:11:05.927 07:11:39 -- accel/accel_rpc.sh@42 -- # grep software 00:11:05.927 07:11:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:06.186 software 00:11:06.186 ************************************ 00:11:06.186 END TEST accel_assign_opcode 00:11:06.186 ************************************ 00:11:06.186 00:11:06.186 real 0m0.720s 00:11:06.186 user 0m0.060s 00:11:06.186 sys 0m0.011s 00:11:06.186 07:11:39 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:11:06.186 07:11:39 -- common/autotest_common.sh@10 -- # set +x 00:11:06.186 07:11:39 -- accel/accel_rpc.sh@55 -- # killprocess 112049 00:11:06.186 07:11:39 -- common/autotest_common.sh@924 -- # '[' -z 112049 ']' 00:11:06.186 07:11:39 -- common/autotest_common.sh@928 -- # kill -0 112049 00:11:06.186 07:11:39 -- common/autotest_common.sh@929 -- # uname 00:11:06.186 07:11:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:06.186 07:11:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112049 00:11:06.186 killing process with pid 112049 00:11:06.186 07:11:39 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:06.186 07:11:39 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:06.186 07:11:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112049' 00:11:06.186 07:11:39 -- common/autotest_common.sh@943 -- # kill 112049 00:11:06.186 07:11:39 -- common/autotest_common.sh@948 -- # wait 112049 00:11:08.090 ************************************ 00:11:08.090 END TEST accel_rpc 00:11:08.090 ************************************ 00:11:08.090 00:11:08.090 real 0m3.632s 00:11:08.090 user 0m3.606s 00:11:08.090 sys 0m0.518s 00:11:08.090 07:11:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:08.090 07:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:08.090 07:11:41 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:08.090 07:11:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:08.090 07:11:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:08.090 07:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:08.090 ************************************ 00:11:08.090 START TEST app_cmdline 00:11:08.090 ************************************ 00:11:08.090 07:11:41 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:11:08.090 * Looking for test storage... 00:11:08.090 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:08.090 07:11:41 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:08.090 07:11:41 -- app/cmdline.sh@17 -- # spdk_tgt_pid=112182 00:11:08.090 07:11:41 -- app/cmdline.sh@18 -- # waitforlisten 112182 00:11:08.090 07:11:41 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:08.090 07:11:41 -- common/autotest_common.sh@817 -- # '[' -z 112182 ']' 00:11:08.090 07:11:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.090 07:11:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:08.090 07:11:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.090 07:11:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:08.090 07:11:41 -- common/autotest_common.sh@10 -- # set +x 00:11:08.090 [2024-02-13 07:11:41.700881] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
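The app_cmdline suite that has just started checks RPC whitelisting: spdk_tgt is launched with --rpcs-allowed spdk_get_version,rpc_get_methods (see the invocation above), so exactly those two methods may be called and everything else must fail with JSON-RPC error -32601, which is what the env_dpdk_get_mem_stats probe below demonstrates. By hand:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  ./scripts/rpc.py spdk_get_version           # allowed; returns the version object
  ./scripts/rpc.py env_dpdk_get_mem_stats     # rejected: "Method not found"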
00:11:08.090 [2024-02-13 07:11:41.701280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112182 ] 00:11:08.350 [2024-02-13 07:11:41.864685] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.350 [2024-02-13 07:11:42.029386] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:08.350 [2024-02-13 07:11:42.029900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.728 07:11:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:09.728 07:11:43 -- common/autotest_common.sh@850 -- # return 0 00:11:09.728 07:11:43 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:11:09.987 { 00:11:09.987 "version": "SPDK v24.05-pre git sha1 3bec6cb23", 00:11:09.987 "fields": { 00:11:09.987 "major": 24, 00:11:09.987 "minor": 5, 00:11:09.987 "patch": 0, 00:11:09.987 "suffix": "-pre", 00:11:09.987 "commit": "3bec6cb23" 00:11:09.987 } 00:11:09.987 } 00:11:09.987 07:11:43 -- app/cmdline.sh@22 -- # expected_methods=() 00:11:09.987 07:11:43 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:09.987 07:11:43 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:09.987 07:11:43 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:09.987 07:11:43 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:09.987 07:11:43 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:09.987 07:11:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:09.987 07:11:43 -- app/cmdline.sh@26 -- # sort 00:11:09.987 07:11:43 -- common/autotest_common.sh@10 -- # set +x 00:11:09.987 07:11:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:09.987 07:11:43 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:09.987 07:11:43 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:09.987 07:11:43 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:09.987 07:11:43 -- common/autotest_common.sh@638 -- # local es=0 00:11:09.987 07:11:43 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:09.987 07:11:43 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.987 07:11:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:09.987 07:11:43 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.987 07:11:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:09.987 07:11:43 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.987 07:11:43 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:09.987 07:11:43 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:09.987 07:11:43 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:09.987 07:11:43 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:10.246 request: 00:11:10.246 { 00:11:10.246 "method": "env_dpdk_get_mem_stats", 00:11:10.246 "req_id": 1 00:11:10.246 } 00:11:10.246 Got 
JSON-RPC error response 00:11:10.246 response: 00:11:10.246 { 00:11:10.246 "code": -32601, 00:11:10.246 "message": "Method not found" 00:11:10.246 } 00:11:10.246 07:11:43 -- common/autotest_common.sh@641 -- # es=1 00:11:10.246 07:11:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:10.246 07:11:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:10.246 07:11:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:10.246 07:11:43 -- app/cmdline.sh@1 -- # killprocess 112182 00:11:10.246 07:11:43 -- common/autotest_common.sh@924 -- # '[' -z 112182 ']' 00:11:10.246 07:11:43 -- common/autotest_common.sh@928 -- # kill -0 112182 00:11:10.246 07:11:43 -- common/autotest_common.sh@929 -- # uname 00:11:10.246 07:11:43 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:10.246 07:11:43 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112182 00:11:10.505 07:11:43 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:10.506 07:11:43 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:10.506 07:11:43 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112182' 00:11:10.506 killing process with pid 112182 00:11:10.506 07:11:43 -- common/autotest_common.sh@943 -- # kill 112182 00:11:10.506 07:11:43 -- common/autotest_common.sh@948 -- # wait 112182 00:11:12.456 ************************************ 00:11:12.456 END TEST app_cmdline 00:11:12.456 ************************************ 00:11:12.456 00:11:12.456 real 0m4.183s 00:11:12.456 user 0m4.818s 00:11:12.456 sys 0m0.548s 00:11:12.456 07:11:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.456 07:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.456 07:11:45 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:12.456 07:11:45 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:12.456 07:11:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:12.456 07:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.456 ************************************ 00:11:12.456 START TEST version 00:11:12.456 ************************************ 00:11:12.456 07:11:45 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:11:12.456 * Looking for test storage... 
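The version suite below cross-checks the version compiled into the C headers against the Python package. Each field comes out of include/spdk/version.h via the same grep/cut/tr pipeline the script logs, e.g. for the major number:

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
    /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'   # -> 24
  # minor/patch/suffix follow the same pattern; the assembled "24.5rc0" is then
  # compared against: python3 -c 'import spdk; print(spdk.__version__)'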
00:11:12.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:11:12.456 07:11:45 -- app/version.sh@17 -- # get_header_version major 00:11:12.456 07:11:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:12.456 07:11:45 -- app/version.sh@14 -- # cut -f2 00:11:12.456 07:11:45 -- app/version.sh@14 -- # tr -d '"' 00:11:12.456 07:11:45 -- app/version.sh@17 -- # major=24 00:11:12.456 07:11:45 -- app/version.sh@18 -- # get_header_version minor 00:11:12.456 07:11:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:12.456 07:11:45 -- app/version.sh@14 -- # cut -f2 00:11:12.456 07:11:45 -- app/version.sh@14 -- # tr -d '"' 00:11:12.456 07:11:45 -- app/version.sh@18 -- # minor=5 00:11:12.456 07:11:45 -- app/version.sh@19 -- # get_header_version patch 00:11:12.456 07:11:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:12.456 07:11:45 -- app/version.sh@14 -- # cut -f2 00:11:12.456 07:11:45 -- app/version.sh@14 -- # tr -d '"' 00:11:12.456 07:11:45 -- app/version.sh@19 -- # patch=0 00:11:12.456 07:11:45 -- app/version.sh@20 -- # get_header_version suffix 00:11:12.456 07:11:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:11:12.456 07:11:45 -- app/version.sh@14 -- # cut -f2 00:11:12.456 07:11:45 -- app/version.sh@14 -- # tr -d '"' 00:11:12.456 07:11:45 -- app/version.sh@20 -- # suffix=-pre 00:11:12.456 07:11:45 -- app/version.sh@22 -- # version=24.5 00:11:12.456 07:11:45 -- app/version.sh@25 -- # (( patch != 0 )) 00:11:12.456 07:11:45 -- app/version.sh@28 -- # version=24.5rc0 00:11:12.456 07:11:45 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:12.456 07:11:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:12.456 07:11:45 -- app/version.sh@30 -- # py_version=24.5rc0 00:11:12.456 07:11:45 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:11:12.456 00:11:12.456 real 0m0.152s 00:11:12.456 user 0m0.103s 00:11:12.456 sys 0m0.073s 00:11:12.456 ************************************ 00:11:12.456 END TEST version 00:11:12.456 ************************************ 00:11:12.456 07:11:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:12.456 07:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.456 07:11:45 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:11:12.456 07:11:45 -- spdk/autotest.sh@195 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:12.456 07:11:45 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:11:12.456 07:11:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:12.456 07:11:45 -- common/autotest_common.sh@10 -- # set +x 00:11:12.456 ************************************ 00:11:12.456 START TEST blockdev_general 00:11:12.456 ************************************ 00:11:12.456 07:11:45 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:11:12.456 * Looking for test storage... 
00:11:12.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:11:12.456 07:11:46 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:11:12.456 07:11:46 -- bdev/nbd_common.sh@6 -- # set -e 00:11:12.456 07:11:46 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:11:12.456 07:11:46 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:12.456 07:11:46 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:11:12.456 07:11:46 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:11:12.456 07:11:46 -- bdev/blockdev.sh@18 -- # : 00:11:12.456 07:11:46 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:11:12.456 07:11:46 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:11:12.456 07:11:46 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:11:12.456 07:11:46 -- bdev/blockdev.sh@672 -- # uname -s 00:11:12.456 07:11:46 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:11:12.456 07:11:46 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:11:12.456 07:11:46 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:11:12.456 07:11:46 -- bdev/blockdev.sh@681 -- # crypto_device= 00:11:12.456 07:11:46 -- bdev/blockdev.sh@682 -- # dek= 00:11:12.456 07:11:46 -- bdev/blockdev.sh@683 -- # env_ctx= 00:11:12.456 07:11:46 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:11:12.456 07:11:46 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:11:12.456 07:11:46 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:11:12.456 07:11:46 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:11:12.456 07:11:46 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:11:12.457 07:11:46 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=112384 00:11:12.457 07:11:46 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:11:12.457 07:11:46 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:12.457 07:11:46 -- bdev/blockdev.sh@47 -- # waitforlisten 112384 00:11:12.457 07:11:46 -- common/autotest_common.sh@817 -- # '[' -z 112384 ']' 00:11:12.457 07:11:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.457 07:11:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:12.457 07:11:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.457 07:11:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:12.457 07:11:46 -- common/autotest_common.sh@10 -- # set +x 00:11:12.457 [2024-02-13 07:11:46.137634] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
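setup_bdev_conf in the run below builds the whole bdev tree over RPC: Malloc disks, Split Disk children of Malloc1/Malloc2, a passthru bdev (TestPT) on Malloc3, raid0/concat0/raid1 volumes over Malloc4-9, and an AIO bdev on a scratch file. A rough sketch of the equivalent calls, with names and sizes taken from the bdev dump further down (flag spelling follows scripts/rpc.py and may vary by release):

  ./scripts/rpc.py bdev_malloc_create -b Malloc0 32 512        # 65536 blocks of 512 B
  ./scripts/rpc.py bdev_split_create Malloc1 2                 # -> Malloc1p0, Malloc1p1
  ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT
  ./scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r raid0 -b "Malloc4 Malloc5"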
00:11:12.457 [2024-02-13 07:11:46.138038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112384 ] 00:11:12.716 [2024-02-13 07:11:46.285488] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.976 [2024-02-13 07:11:46.453699] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:12.976 [2024-02-13 07:11:46.454230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.544 07:11:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:13.544 07:11:47 -- common/autotest_common.sh@850 -- # return 0 00:11:13.544 07:11:47 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:11:13.544 07:11:47 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:11:13.544 07:11:47 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:11:13.544 07:11:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:13.544 07:11:47 -- common/autotest_common.sh@10 -- # set +x 00:11:14.112 [2024-02-13 07:11:47.776013] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:14.112 [2024-02-13 07:11:47.776399] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:14.112 00:11:14.112 [2024-02-13 07:11:47.783971] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:14.112 [2024-02-13 07:11:47.784165] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:14.112 00:11:14.372 Malloc0 00:11:14.372 Malloc1 00:11:14.372 Malloc2 00:11:14.372 Malloc3 00:11:14.372 Malloc4 00:11:14.372 Malloc5 00:11:14.636 Malloc6 00:11:14.636 Malloc7 00:11:14.636 Malloc8 00:11:14.636 Malloc9 00:11:14.636 [2024-02-13 07:11:48.193283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:14.636 [2024-02-13 07:11:48.193575] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:14.636 [2024-02-13 07:11:48.193703] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:14.636 [2024-02-13 07:11:48.193834] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:14.636 [2024-02-13 07:11:48.196524] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:14.636 [2024-02-13 07:11:48.196682] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:14.636 TestPT 00:11:14.636 07:11:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.636 07:11:48 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:11:14.636 5000+0 records in 00:11:14.636 5000+0 records out 00:11:14.636 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0274865 s, 373 MB/s 00:11:14.636 07:11:48 -- bdev/blockdev.sh@75 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048 00:11:14.636 07:11:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.636 07:11:48 -- common/autotest_common.sh@10 -- # set +x 00:11:14.636 AIO0 00:11:14.636 07:11:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.636 07:11:48 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:11:14.636 07:11:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.636 07:11:48 -- common/autotest_common.sh@10 -- # set +x 
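The AIO0 target above is just a 10 MB scratch file served through the Linux AIO bdev module; the dd/RPC pair from the transcript can be replayed verbatim:

  dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
  ./scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
  # the trailing 2048 forces the block size, hence "block_size_override": true
  # in the AIO0 entry of the bdev dump below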
00:11:14.636 07:11:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.636 07:11:48 -- bdev/blockdev.sh@738 -- # cat 00:11:14.636 07:11:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:11:14.636 07:11:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.636 07:11:48 -- common/autotest_common.sh@10 -- # set +x 00:11:14.896 07:11:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.896 07:11:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:11:14.896 07:11:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.896 07:11:48 -- common/autotest_common.sh@10 -- # set +x 00:11:14.896 07:11:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.896 07:11:48 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:11:14.896 07:11:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.896 07:11:48 -- common/autotest_common.sh@10 -- # set +x 00:11:14.896 07:11:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.896 07:11:48 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:11:14.896 07:11:48 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:11:14.896 07:11:48 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:11:14.896 07:11:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.896 07:11:48 -- common/autotest_common.sh@10 -- # set +x 00:11:14.896 07:11:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.896 07:11:48 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:11:14.896 07:11:48 -- bdev/blockdev.sh@747 -- # jq -r .name 00:11:14.897 07:11:48 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d5d8eff6-af25-4b58-8523-58307d53879c"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d5d8eff6-af25-4b58-8523-58307d53879c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "48d7a191-d6f7-5371-bba8-2cd34696d5bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "48d7a191-d6f7-5371-bba8-2cd34696d5bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "cb0b4a8f-8d7e-5066-aa2b-7fc24602379e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cb0b4a8f-8d7e-5066-aa2b-7fc24602379e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2b30e472-15b0-562e-b17b-f0e8be65e5d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b30e472-15b0-562e-b17b-f0e8be65e5d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "136bd053-7aa3-518f-a1b0-9b1382e0225d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "136bd053-7aa3-518f-a1b0-9b1382e0225d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "227a74ce-4963-57e2-a130-4eca26be30b6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "227a74ce-4963-57e2-a130-4eca26be30b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "19938833-656d-5fab-8db4-603a68e23284"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "19938833-656d-5fab-8db4-603a68e23284",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": 
[' ' "d09ba780-6652-583c-8489-d80753efe1c1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d09ba780-6652-583c-8489-d80753efe1c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "78ae8ebb-ccdf-5ac7-a449-1cc5e22307e3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "78ae8ebb-ccdf-5ac7-a449-1cc5e22307e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0d5c9065-7597-5d67-9337-4762e1383a16"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0d5c9065-7597-5d67-9337-4762e1383a16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "2d67eb24-31d9-598b-a6c7-f9431678fa4d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2d67eb24-31d9-598b-a6c7-f9431678fa4d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "c5b21be7-ec59-5d4d-a76b-07ca73c71c6e"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c5b21be7-ec59-5d4d-a76b-07ca73c71c6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' 
"compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "fc36f74b-8793-4768-a67c-037e73a1e493"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fc36f74b-8793-4768-a67c-037e73a1e493",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fc36f74b-8793-4768-a67c-037e73a1e493",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "158437f4-31db-434c-88e2-159f1f336e98",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "717aecce-4f12-40ac-9ff9-8dc3e840f940",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "147efd5e-002f-42d7-b738-7464790a0098"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "147efd5e-002f-42d7-b738-7464790a0098",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "147efd5e-002f-42d7-b738-7464790a0098",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c355cfbe-caac-4695-b623-fe8c2d267e75",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ba4e14d7-49f7-4f1f-b903-3b4183f158ca",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "cafdea3c-0128-4ba2-a340-8263cb93b2d3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cafdea3c-0128-4ba2-a340-8263cb93b2d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 
0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cafdea3c-0128-4ba2-a340-8263cb93b2d3",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c74bae9a-8217-4ab0-8f77-2265ff8e3137",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "2fce627c-a8a9-4401-91b2-1de1a2657850",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "98b2f15c-11c7-4bbe-975e-14c0b08e853f"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "98b2f15c-11c7-4bbe-975e-14c0b08e853f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:11:14.897 07:11:48 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:11:14.897 07:11:48 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:11:14.897 07:11:48 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:11:14.897 07:11:48 -- bdev/blockdev.sh@752 -- # killprocess 112384 00:11:14.897 07:11:48 -- common/autotest_common.sh@924 -- # '[' -z 112384 ']' 00:11:14.897 07:11:48 -- common/autotest_common.sh@928 -- # kill -0 112384 00:11:14.897 07:11:48 -- common/autotest_common.sh@929 -- # uname 00:11:14.897 07:11:48 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:14.897 07:11:48 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112384 00:11:14.897 07:11:48 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:14.897 07:11:48 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:14.897 07:11:48 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112384' 00:11:14.897 killing process with pid 112384 00:11:14.897 07:11:48 -- common/autotest_common.sh@943 -- # kill 112384 00:11:14.897 07:11:48 -- common/autotest_common.sh@948 -- # wait 112384 00:11:17.433 07:11:51 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:11:17.433 07:11:51 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:17.433 07:11:51 -- common/autotest_common.sh@1075 -- # 
'[' 7 -le 1 ']' 00:11:17.433 07:11:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:17.433 07:11:51 -- common/autotest_common.sh@10 -- # set +x 00:11:17.433 ************************************ 00:11:17.433 START TEST bdev_hello_world 00:11:17.433 ************************************ 00:11:17.433 07:11:51 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:11:17.694 [2024-02-13 07:11:51.179884] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:11:17.694 [2024-02-13 07:11:51.180395] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112467 ] 00:11:17.694 [2024-02-13 07:11:51.346061] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:17.954 [2024-02-13 07:11:51.516878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.954 [2024-02-13 07:11:51.517247] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:11:18.213 [2024-02-13 07:11:51.849758] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:18.213 [2024-02-13 07:11:51.850179] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:18.213 [2024-02-13 07:11:51.857694] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:18.213 [2024-02-13 07:11:51.857891] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:18.213 [2024-02-13 07:11:51.865731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:18.213 [2024-02-13 07:11:51.865921] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:18.213 [2024-02-13 07:11:51.866043] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:18.472 [2024-02-13 07:11:52.057606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:18.472 [2024-02-13 07:11:52.058046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.472 [2024-02-13 07:11:52.058113] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:18.472 [2024-02-13 07:11:52.058228] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.472 [2024-02-13 07:11:52.060717] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.472 [2024-02-13 07:11:52.060904] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:18.731 [2024-02-13 07:11:52.366844] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:11:18.731 [2024-02-13 07:11:52.367331] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:11:18.731 [2024-02-13 07:11:52.367486] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:11:18.731 [2024-02-13 07:11:52.367739] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:11:18.731 [2024-02-13 07:11:52.368005] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:11:18.731 
[2024-02-13 07:11:52.368313] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:11:18.731 [2024-02-13 07:11:52.368632] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:11:18.731 00:11:18.731 [2024-02-13 07:11:52.368886] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:11:18.731 [2024-02-13 07:11:52.369159] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:11:20.660 ************************************ 00:11:20.660 END TEST bdev_hello_world 00:11:20.660 ************************************ 00:11:20.660 00:11:20.660 real 0m2.985s 00:11:20.660 user 0m2.408s 00:11:20.660 sys 0m0.426s 00:11:20.660 07:11:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:20.660 07:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:20.660 07:11:54 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:11:20.660 07:11:54 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:11:20.660 07:11:54 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:20.660 07:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:20.660 ************************************ 00:11:20.660 START TEST bdev_bounds 00:11:20.660 ************************************ 00:11:20.660 07:11:54 -- common/autotest_common.sh@1102 -- # bdev_bounds '' 00:11:20.660 07:11:54 -- bdev/blockdev.sh@288 -- # bdevio_pid=112543 00:11:20.660 07:11:54 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:11:20.660 07:11:54 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:20.660 07:11:54 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 112543' 00:11:20.660 Process bdevio pid: 112543 00:11:20.660 07:11:54 -- bdev/blockdev.sh@291 -- # waitforlisten 112543 00:11:20.660 07:11:54 -- common/autotest_common.sh@817 -- # '[' -z 112543 ']' 00:11:20.660 07:11:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.660 07:11:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:20.660 07:11:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.660 07:11:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:20.660 07:11:54 -- common/autotest_common.sh@10 -- # set +x 00:11:20.660 [2024-02-13 07:11:54.229649] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
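bdev_bounds, which has just started above, runs the bdevio app over every bdev in the tree; the per-bdev suites printed below are kicked off by a second process over RPC. The two commands the harness uses, as recorded in the transcript:

  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json '' &
  ./test/bdev/bdevio/tests.py perform_tests
  # -w: wait for the perform_tests RPC before running; -s 0 forwards the
  # script's PRE_RESERVED_MEM=0. Each suite then walks the write/read, zeroes,
  # split and passthru cases shown below for one bdev (AIO0, raid1, concat0, ...)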
00:11:20.660 [2024-02-13 07:11:54.230084] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112543 ] 00:11:20.920 [2024-02-13 07:11:54.409993] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:20.920 [2024-02-13 07:11:54.583882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.920 [2024-02-13 07:11:54.584044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.920 [2024-02-13 07:11:54.584045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.920 [2024-02-13 07:11:54.584749] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:11:21.487 [2024-02-13 07:11:54.930485] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:21.487 [2024-02-13 07:11:54.930891] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:21.487 [2024-02-13 07:11:54.938452] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:21.487 [2024-02-13 07:11:54.938685] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:21.487 [2024-02-13 07:11:54.946463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:21.487 [2024-02-13 07:11:54.946660] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:21.487 [2024-02-13 07:11:54.946806] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:21.487 [2024-02-13 07:11:55.154105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:21.487 [2024-02-13 07:11:55.154547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.487 [2024-02-13 07:11:55.154774] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:21.487 [2024-02-13 07:11:55.154899] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.487 [2024-02-13 07:11:55.157826] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.487 [2024-02-13 07:11:55.158027] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:22.423 07:11:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:22.423 07:11:55 -- common/autotest_common.sh@850 -- # return 0 00:11:22.423 07:11:55 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:11:22.423 I/O targets: 00:11:22.423 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:11:22.423 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:11:22.423 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:11:22.423 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:11:22.423 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:11:22.423 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:11:22.423 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:11:22.423 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:11:22.423 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:11:22.423 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:11:22.423 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:11:22.423 TestPT: 65536 blocks of 512 
bytes (32 MiB) 00:11:22.423 raid0: 131072 blocks of 512 bytes (64 MiB) 00:11:22.423 concat0: 131072 blocks of 512 bytes (64 MiB) 00:11:22.423 raid1: 65536 blocks of 512 bytes (32 MiB) 00:11:22.423 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:11:22.423 00:11:22.423 00:11:22.423 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.423 http://cunit.sourceforge.net/ 00:11:22.423 00:11:22.423 00:11:22.423 Suite: bdevio tests on: AIO0 00:11:22.423 Test: blockdev write read block ...passed 00:11:22.423 Test: blockdev write zeroes read block ...passed 00:11:22.423 Test: blockdev write zeroes read no split ...passed 00:11:22.423 Test: blockdev write zeroes read split ...passed 00:11:22.423 Test: blockdev write zeroes read split partial ...passed 00:11:22.423 Test: blockdev reset ...passed 00:11:22.423 Test: blockdev write read 8 blocks ...passed 00:11:22.423 Test: blockdev write read size > 128k ...passed 00:11:22.423 Test: blockdev write read invalid size ...passed 00:11:22.423 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.423 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.423 Test: blockdev write read max offset ...passed 00:11:22.423 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.423 Test: blockdev writev readv 8 blocks ...passed 00:11:22.423 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.423 Test: blockdev writev readv block ...passed 00:11:22.423 Test: blockdev writev readv size > 128k ...passed 00:11:22.423 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.423 Test: blockdev comparev and writev ...passed 00:11:22.423 Test: blockdev nvme passthru rw ...passed 00:11:22.423 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.423 Test: blockdev nvme admin passthru ...passed 00:11:22.423 Test: blockdev copy ...passed 00:11:22.423 Suite: bdevio tests on: raid1 00:11:22.423 Test: blockdev write read block ...passed 00:11:22.423 Test: blockdev write zeroes read block ...passed 00:11:22.423 Test: blockdev write zeroes read no split ...passed 00:11:22.423 Test: blockdev write zeroes read split ...passed 00:11:22.423 Test: blockdev write zeroes read split partial ...passed 00:11:22.423 Test: blockdev reset ...passed 00:11:22.423 Test: blockdev write read 8 blocks ...passed 00:11:22.423 Test: blockdev write read size > 128k ...passed 00:11:22.423 Test: blockdev write read invalid size ...passed 00:11:22.423 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.423 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.423 Test: blockdev write read max offset ...passed 00:11:22.423 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.423 Test: blockdev writev readv 8 blocks ...passed 00:11:22.423 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.423 Test: blockdev writev readv block ...passed 00:11:22.423 Test: blockdev writev readv size > 128k ...passed 00:11:22.423 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.423 Test: blockdev comparev and writev ...passed 00:11:22.423 Test: blockdev nvme passthru rw ...passed 00:11:22.423 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.423 Test: blockdev nvme admin passthru ...passed 00:11:22.423 Test: blockdev copy ...passed 00:11:22.423 Suite: bdevio tests on: concat0 00:11:22.423 Test: blockdev write read block ...passed 00:11:22.423 Test: blockdev write zeroes read block 
...passed 00:11:22.423 Test: blockdev write zeroes read no split ...passed 00:11:22.423 Test: blockdev write zeroes read split ...passed 00:11:22.423 Test: blockdev write zeroes read split partial ...passed 00:11:22.423 Test: blockdev reset ...passed 00:11:22.423 Test: blockdev write read 8 blocks ...passed 00:11:22.423 Test: blockdev write read size > 128k ...passed 00:11:22.423 Test: blockdev write read invalid size ...passed 00:11:22.423 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.423 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.423 Test: blockdev write read max offset ...passed 00:11:22.423 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.423 Test: blockdev writev readv 8 blocks ...passed 00:11:22.423 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.423 Test: blockdev writev readv block ...passed 00:11:22.423 Test: blockdev writev readv size > 128k ...passed 00:11:22.423 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.423 Test: blockdev comparev and writev ...passed 00:11:22.423 Test: blockdev nvme passthru rw ...passed 00:11:22.423 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.423 Test: blockdev nvme admin passthru ...passed 00:11:22.423 Test: blockdev copy ...passed 00:11:22.423 Suite: bdevio tests on: raid0 00:11:22.423 Test: blockdev write read block ...passed 00:11:22.423 Test: blockdev write zeroes read block ...passed 00:11:22.423 Test: blockdev write zeroes read no split ...passed 00:11:22.683 Test: blockdev write zeroes read split ...passed 00:11:22.683 Test: blockdev write zeroes read split partial ...passed 00:11:22.683 Test: blockdev reset ...passed 00:11:22.683 Test: blockdev write read 8 blocks ...passed 00:11:22.683 Test: blockdev write read size > 128k ...passed 00:11:22.683 Test: blockdev write read invalid size ...passed 00:11:22.683 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.683 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.683 Test: blockdev write read max offset ...passed 00:11:22.683 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.683 Test: blockdev writev readv 8 blocks ...passed 00:11:22.683 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.683 Test: blockdev writev readv block ...passed 00:11:22.683 Test: blockdev writev readv size > 128k ...passed 00:11:22.683 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.683 Test: blockdev comparev and writev ...passed 00:11:22.683 Test: blockdev nvme passthru rw ...passed 00:11:22.683 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.683 Test: blockdev nvme admin passthru ...passed 00:11:22.683 Test: blockdev copy ...passed 00:11:22.683 Suite: bdevio tests on: TestPT 00:11:22.683 Test: blockdev write read block ...passed 00:11:22.683 Test: blockdev write zeroes read block ...passed 00:11:22.683 Test: blockdev write zeroes read no split ...passed 00:11:22.683 Test: blockdev write zeroes read split ...passed 00:11:22.683 Test: blockdev write zeroes read split partial ...passed 00:11:22.683 Test: blockdev reset ...passed 00:11:22.683 Test: blockdev write read 8 blocks ...passed 00:11:22.683 Test: blockdev write read size > 128k ...passed 00:11:22.683 Test: blockdev write read invalid size ...passed 00:11:22.683 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.683 Test: blockdev write read 
offset + nbytes > size of blockdev ...passed 00:11:22.683 Test: blockdev write read max offset ...passed 00:11:22.683 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.683 Test: blockdev writev readv 8 blocks ...passed 00:11:22.683 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.683 Test: blockdev writev readv block ...passed 00:11:22.683 Test: blockdev writev readv size > 128k ...passed 00:11:22.683 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.683 Test: blockdev comparev and writev ...passed 00:11:22.683 Test: blockdev nvme passthru rw ...passed 00:11:22.683 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.683 Test: blockdev nvme admin passthru ...passed 00:11:22.683 Test: blockdev copy ...passed 00:11:22.683 Suite: bdevio tests on: Malloc2p7 00:11:22.683 Test: blockdev write read block ...passed 00:11:22.683 Test: blockdev write zeroes read block ...passed 00:11:22.683 Test: blockdev write zeroes read no split ...passed 00:11:22.683 Test: blockdev write zeroes read split ...passed 00:11:22.683 Test: blockdev write zeroes read split partial ...passed 00:11:22.683 Test: blockdev reset ...passed 00:11:22.683 Test: blockdev write read 8 blocks ...passed 00:11:22.683 Test: blockdev write read size > 128k ...passed 00:11:22.683 Test: blockdev write read invalid size ...passed 00:11:22.683 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.683 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.683 Test: blockdev write read max offset ...passed 00:11:22.683 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.683 Test: blockdev writev readv 8 blocks ...passed 00:11:22.683 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.683 Test: blockdev writev readv block ...passed 00:11:22.683 Test: blockdev writev readv size > 128k ...passed 00:11:22.683 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.683 Test: blockdev comparev and writev ...passed 00:11:22.683 Test: blockdev nvme passthru rw ...passed 00:11:22.683 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.683 Test: blockdev nvme admin passthru ...passed 00:11:22.683 Test: blockdev copy ...passed 00:11:22.683 Suite: bdevio tests on: Malloc2p6 00:11:22.683 Test: blockdev write read block ...passed 00:11:22.683 Test: blockdev write zeroes read block ...passed 00:11:22.683 Test: blockdev write zeroes read no split ...passed 00:11:22.683 Test: blockdev write zeroes read split ...passed 00:11:22.683 Test: blockdev write zeroes read split partial ...passed 00:11:22.683 Test: blockdev reset ...passed 00:11:22.683 Test: blockdev write read 8 blocks ...passed 00:11:22.683 Test: blockdev write read size > 128k ...passed 00:11:22.683 Test: blockdev write read invalid size ...passed 00:11:22.683 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.683 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.683 Test: blockdev write read max offset ...passed 00:11:22.683 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.683 Test: blockdev writev readv 8 blocks ...passed 00:11:22.683 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.683 Test: blockdev writev readv block ...passed 00:11:22.683 Test: blockdev writev readv size > 128k ...passed 00:11:22.683 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.683 Test: blockdev comparev and 
writev ...passed 00:11:22.683 Test: blockdev nvme passthru rw ...passed 00:11:22.683 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.683 Test: blockdev nvme admin passthru ...passed 00:11:22.683 Test: blockdev copy ...passed 00:11:22.683 Suite: bdevio tests on: Malloc2p5 00:11:22.683 Test: blockdev write read block ...passed 00:11:22.683 Test: blockdev write zeroes read block ...passed 00:11:22.683 Test: blockdev write zeroes read no split ...passed 00:11:22.943 Test: blockdev write zeroes read split ...passed 00:11:22.943 Test: blockdev write zeroes read split partial ...passed 00:11:22.943 Test: blockdev reset ...passed 00:11:22.943 Test: blockdev write read 8 blocks ...passed 00:11:22.943 Test: blockdev write read size > 128k ...passed 00:11:22.943 Test: blockdev write read invalid size ...passed 00:11:22.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.943 Test: blockdev write read max offset ...passed 00:11:22.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.943 Test: blockdev writev readv 8 blocks ...passed 00:11:22.943 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.943 Test: blockdev writev readv block ...passed 00:11:22.943 Test: blockdev writev readv size > 128k ...passed 00:11:22.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.943 Test: blockdev comparev and writev ...passed 00:11:22.943 Test: blockdev nvme passthru rw ...passed 00:11:22.943 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.943 Test: blockdev nvme admin passthru ...passed 00:11:22.943 Test: blockdev copy ...passed 00:11:22.943 Suite: bdevio tests on: Malloc2p4 00:11:22.943 Test: blockdev write read block ...passed 00:11:22.943 Test: blockdev write zeroes read block ...passed 00:11:22.943 Test: blockdev write zeroes read no split ...passed 00:11:22.943 Test: blockdev write zeroes read split ...passed 00:11:22.943 Test: blockdev write zeroes read split partial ...passed 00:11:22.943 Test: blockdev reset ...passed 00:11:22.943 Test: blockdev write read 8 blocks ...passed 00:11:22.943 Test: blockdev write read size > 128k ...passed 00:11:22.943 Test: blockdev write read invalid size ...passed 00:11:22.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.943 Test: blockdev write read max offset ...passed 00:11:22.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.943 Test: blockdev writev readv 8 blocks ...passed 00:11:22.943 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.943 Test: blockdev writev readv block ...passed 00:11:22.943 Test: blockdev writev readv size > 128k ...passed 00:11:22.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.943 Test: blockdev comparev and writev ...passed 00:11:22.943 Test: blockdev nvme passthru rw ...passed 00:11:22.943 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.943 Test: blockdev nvme admin passthru ...passed 00:11:22.943 Test: blockdev copy ...passed 00:11:22.943 Suite: bdevio tests on: Malloc2p3 00:11:22.943 Test: blockdev write read block ...passed 00:11:22.943 Test: blockdev write zeroes read block ...passed 00:11:22.943 Test: blockdev write zeroes read no split ...passed 00:11:22.943 Test: blockdev write zeroes read split ...passed 00:11:22.943 Test: 
blockdev write zeroes read split partial ...passed 00:11:22.943 Test: blockdev reset ...passed 00:11:22.943 Test: blockdev write read 8 blocks ...passed 00:11:22.943 Test: blockdev write read size > 128k ...passed 00:11:22.943 Test: blockdev write read invalid size ...passed 00:11:22.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.943 Test: blockdev write read max offset ...passed 00:11:22.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.943 Test: blockdev writev readv 8 blocks ...passed 00:11:22.943 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.943 Test: blockdev writev readv block ...passed 00:11:22.943 Test: blockdev writev readv size > 128k ...passed 00:11:22.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.943 Test: blockdev comparev and writev ...passed 00:11:22.943 Test: blockdev nvme passthru rw ...passed 00:11:22.943 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.943 Test: blockdev nvme admin passthru ...passed 00:11:22.943 Test: blockdev copy ...passed 00:11:22.943 Suite: bdevio tests on: Malloc2p2 00:11:22.943 Test: blockdev write read block ...passed 00:11:22.943 Test: blockdev write zeroes read block ...passed 00:11:22.943 Test: blockdev write zeroes read no split ...passed 00:11:22.943 Test: blockdev write zeroes read split ...passed 00:11:22.943 Test: blockdev write zeroes read split partial ...passed 00:11:22.943 Test: blockdev reset ...passed 00:11:22.943 Test: blockdev write read 8 blocks ...passed 00:11:22.943 Test: blockdev write read size > 128k ...passed 00:11:22.943 Test: blockdev write read invalid size ...passed 00:11:22.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.943 Test: blockdev write read max offset ...passed 00:11:22.943 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.943 Test: blockdev writev readv 8 blocks ...passed 00:11:22.943 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.943 Test: blockdev writev readv block ...passed 00:11:22.943 Test: blockdev writev readv size > 128k ...passed 00:11:22.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.943 Test: blockdev comparev and writev ...passed 00:11:22.943 Test: blockdev nvme passthru rw ...passed 00:11:22.943 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.943 Test: blockdev nvme admin passthru ...passed 00:11:22.943 Test: blockdev copy ...passed 00:11:22.943 Suite: bdevio tests on: Malloc2p1 00:11:22.943 Test: blockdev write read block ...passed 00:11:22.943 Test: blockdev write zeroes read block ...passed 00:11:22.943 Test: blockdev write zeroes read no split ...passed 00:11:22.943 Test: blockdev write zeroes read split ...passed 00:11:22.943 Test: blockdev write zeroes read split partial ...passed 00:11:22.943 Test: blockdev reset ...passed 00:11:22.943 Test: blockdev write read 8 blocks ...passed 00:11:22.943 Test: blockdev write read size > 128k ...passed 00:11:22.943 Test: blockdev write read invalid size ...passed 00:11:22.943 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.943 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.943 Test: blockdev write read max offset ...passed 00:11:22.943 Test: blockdev write read 2 
blocks on overlapped address offset ...passed 00:11:22.943 Test: blockdev writev readv 8 blocks ...passed 00:11:22.943 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.943 Test: blockdev writev readv block ...passed 00:11:22.943 Test: blockdev writev readv size > 128k ...passed 00:11:22.943 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.943 Test: blockdev comparev and writev ...passed 00:11:22.943 Test: blockdev nvme passthru rw ...passed 00:11:22.943 Test: blockdev nvme passthru vendor specific ...passed 00:11:22.943 Test: blockdev nvme admin passthru ...passed 00:11:22.944 Test: blockdev copy ...passed 00:11:22.944 Suite: bdevio tests on: Malloc2p0 00:11:22.944 Test: blockdev write read block ...passed 00:11:22.944 Test: blockdev write zeroes read block ...passed 00:11:22.944 Test: blockdev write zeroes read no split ...passed 00:11:22.944 Test: blockdev write zeroes read split ...passed 00:11:23.203 Test: blockdev write zeroes read split partial ...passed 00:11:23.203 Test: blockdev reset ...passed 00:11:23.203 Test: blockdev write read 8 blocks ...passed 00:11:23.203 Test: blockdev write read size > 128k ...passed 00:11:23.203 Test: blockdev write read invalid size ...passed 00:11:23.203 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:23.203 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:23.203 Test: blockdev write read max offset ...passed 00:11:23.203 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:23.203 Test: blockdev writev readv 8 blocks ...passed 00:11:23.203 Test: blockdev writev readv 30 x 1block ...passed 00:11:23.203 Test: blockdev writev readv block ...passed 00:11:23.203 Test: blockdev writev readv size > 128k ...passed 00:11:23.203 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:23.203 Test: blockdev comparev and writev ...passed 00:11:23.203 Test: blockdev nvme passthru rw ...passed 00:11:23.203 Test: blockdev nvme passthru vendor specific ...passed 00:11:23.203 Test: blockdev nvme admin passthru ...passed 00:11:23.203 Test: blockdev copy ...passed 00:11:23.203 Suite: bdevio tests on: Malloc1p1 00:11:23.203 Test: blockdev write read block ...passed 00:11:23.203 Test: blockdev write zeroes read block ...passed 00:11:23.203 Test: blockdev write zeroes read no split ...passed 00:11:23.203 Test: blockdev write zeroes read split ...passed 00:11:23.203 Test: blockdev write zeroes read split partial ...passed 00:11:23.203 Test: blockdev reset ...passed 00:11:23.203 Test: blockdev write read 8 blocks ...passed 00:11:23.203 Test: blockdev write read size > 128k ...passed 00:11:23.203 Test: blockdev write read invalid size ...passed 00:11:23.203 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:23.203 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:23.203 Test: blockdev write read max offset ...passed 00:11:23.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:23.204 Test: blockdev writev readv 8 blocks ...passed 00:11:23.204 Test: blockdev writev readv 30 x 1block ...passed 00:11:23.204 Test: blockdev writev readv block ...passed 00:11:23.204 Test: blockdev writev readv size > 128k ...passed 00:11:23.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:23.204 Test: blockdev comparev and writev ...passed 00:11:23.204 Test: blockdev nvme passthru rw ...passed 00:11:23.204 Test: blockdev nvme passthru vendor specific ...passed 
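The bdevio output above and below is one generic block-device conformance suite stamped out per bdev: 16 suites (Malloc0 through AIO0) of 23 tests each, which is where the 368 in the run summary comes from. The binary starts waiting on RPC and the Python driver fires every suite over its socket; a sketch of the two commands this test wires together, with flags copied verbatim from the log invocation (their exact semantics are not asserted here):

    # The -w and -s 0 flags are copied from the bdevio invocation above.
    ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # tests.py talks to the app over /var/tmp/spdk.sock (the socket
    # waitforlisten polls above) and prints the CUnit summary seen below.
    ./test/bdev/bdevio/tests.py perform_tests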
00:11:23.204 Test: blockdev nvme admin passthru ...passed 00:11:23.204 Test: blockdev copy ...passed 00:11:23.204 Suite: bdevio tests on: Malloc1p0 00:11:23.204 Test: blockdev write read block ...passed 00:11:23.204 Test: blockdev write zeroes read block ...passed 00:11:23.204 Test: blockdev write zeroes read no split ...passed 00:11:23.204 Test: blockdev write zeroes read split ...passed 00:11:23.204 Test: blockdev write zeroes read split partial ...passed 00:11:23.204 Test: blockdev reset ...passed 00:11:23.204 Test: blockdev write read 8 blocks ...passed 00:11:23.204 Test: blockdev write read size > 128k ...passed 00:11:23.204 Test: blockdev write read invalid size ...passed 00:11:23.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:23.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:23.204 Test: blockdev write read max offset ...passed 00:11:23.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:23.204 Test: blockdev writev readv 8 blocks ...passed 00:11:23.204 Test: blockdev writev readv 30 x 1block ...passed 00:11:23.204 Test: blockdev writev readv block ...passed 00:11:23.204 Test: blockdev writev readv size > 128k ...passed 00:11:23.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:23.204 Test: blockdev comparev and writev ...passed 00:11:23.204 Test: blockdev nvme passthru rw ...passed 00:11:23.204 Test: blockdev nvme passthru vendor specific ...passed 00:11:23.204 Test: blockdev nvme admin passthru ...passed 00:11:23.204 Test: blockdev copy ...passed 00:11:23.204 Suite: bdevio tests on: Malloc0 00:11:23.204 Test: blockdev write read block ...passed 00:11:23.204 Test: blockdev write zeroes read block ...passed 00:11:23.204 Test: blockdev write zeroes read no split ...passed 00:11:23.204 Test: blockdev write zeroes read split ...passed 00:11:23.204 Test: blockdev write zeroes read split partial ...passed 00:11:23.204 Test: blockdev reset ...passed 00:11:23.204 Test: blockdev write read 8 blocks ...passed 00:11:23.204 Test: blockdev write read size > 128k ...passed 00:11:23.204 Test: blockdev write read invalid size ...passed 00:11:23.204 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:23.204 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:23.204 Test: blockdev write read max offset ...passed 00:11:23.204 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:23.204 Test: blockdev writev readv 8 blocks ...passed 00:11:23.204 Test: blockdev writev readv 30 x 1block ...passed 00:11:23.204 Test: blockdev writev readv block ...passed 00:11:23.204 Test: blockdev writev readv size > 128k ...passed 00:11:23.204 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:23.204 Test: blockdev comparev and writev ...passed 00:11:23.204 Test: blockdev nvme passthru rw ...passed 00:11:23.204 Test: blockdev nvme passthru vendor specific ...passed 00:11:23.204 Test: blockdev nvme admin passthru ...passed 00:11:23.204 Test: blockdev copy ...passed 00:11:23.204 00:11:23.204 Run Summary: Type Total Ran Passed Failed Inactive 00:11:23.204 suites 16 16 n/a 0 0 00:11:23.204 tests 368 368 368 0 0 00:11:23.204 asserts 2224 2224 2224 0 n/a 00:11:23.204 00:11:23.204 Elapsed time = 2.510 seconds 00:11:23.204 0 00:11:23.204 07:11:56 -- bdev/blockdev.sh@293 -- # killprocess 112543 00:11:23.204 07:11:56 -- common/autotest_common.sh@924 -- # '[' -z 112543 ']' 00:11:23.204 07:11:56 -- 
common/autotest_common.sh@928 -- # kill -0 112543 00:11:23.204 07:11:56 -- common/autotest_common.sh@929 -- # uname 00:11:23.204 07:11:56 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:23.204 07:11:56 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112543 00:11:23.463 killing process with pid 112543 00:11:23.463 07:11:56 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:23.463 07:11:56 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:23.463 07:11:56 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112543' 00:11:23.463 07:11:56 -- common/autotest_common.sh@943 -- # kill 112543 00:11:23.463 07:11:56 -- common/autotest_common.sh@948 -- # wait 112543 00:11:23.463 [2024-02-13 07:11:56.894429] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:11:24.840 ************************************ 00:11:24.840 END TEST bdev_bounds 00:11:24.840 ************************************ 00:11:24.840 07:11:58 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:11:24.840 00:11:24.840 real 0m4.352s 00:11:24.840 user 0m11.193s 00:11:24.840 sys 0m0.605s 00:11:24.840 07:11:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:24.840 07:11:58 -- common/autotest_common.sh@10 -- # set +x 00:11:25.099 07:11:58 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:25.099 07:11:58 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:11:25.099 07:11:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:25.099 07:11:58 -- common/autotest_common.sh@10 -- # set +x 00:11:25.099 ************************************ 00:11:25.099 START TEST bdev_nbd 00:11:25.099 ************************************ 00:11:25.099 07:11:58 -- common/autotest_common.sh@1102 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:11:25.099 07:11:58 -- bdev/blockdev.sh@298 -- # uname -s 00:11:25.099 07:11:58 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:11:25.099 07:11:58 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:25.099 07:11:58 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:25.099 07:11:58 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:11:25.099 07:11:58 -- bdev/blockdev.sh@302 -- # local bdev_all 00:11:25.099 07:11:58 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:11:25.099 07:11:58 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:11:25.099 07:11:58 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:11:25.099 07:11:58 -- bdev/blockdev.sh@309 -- # local nbd_all 00:11:25.099 07:11:58 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:11:25.099 07:11:58 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:11:25.099 07:11:58 -- bdev/blockdev.sh@312 -- # local nbd_list 00:11:25.099 07:11:58 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:11:25.099 07:11:58 -- bdev/blockdev.sh@313 -- # local bdev_list 00:11:25.099 07:11:58 -- bdev/blockdev.sh@316 -- # nbd_pid=112639 00:11:25.099 07:11:58 -- 
bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:11:25.099 07:11:58 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:11:25.099 07:11:58 -- bdev/blockdev.sh@318 -- # waitforlisten 112639 /var/tmp/spdk-nbd.sock 00:11:25.099 07:11:58 -- common/autotest_common.sh@817 -- # '[' -z 112639 ']' 00:11:25.099 07:11:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:11:25.099 07:11:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:25.099 07:11:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:11:25.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:11:25.099 07:11:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:25.099 07:11:58 -- common/autotest_common.sh@10 -- # set +x 00:11:25.099 [2024-02-13 07:11:58.639865] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:11:25.099 [2024-02-13 07:11:58.641089] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.357 [2024-02-13 07:11:58.806795] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.357 [2024-02-13 07:11:58.967960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.357 [2024-02-13 07:11:58.968360] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:11:25.925 [2024-02-13 07:11:59.309316] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:25.925 [2024-02-13 07:11:59.309605] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:11:25.925 [2024-02-13 07:11:59.317297] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:25.925 [2024-02-13 07:11:59.317479] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:11:25.925 [2024-02-13 07:11:59.325278] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:25.925 [2024-02-13 07:11:59.325392] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:11:25.925 [2024-02-13 07:11:59.325473] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:11:25.925 [2024-02-13 07:11:59.533986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:11:25.925 [2024-02-13 07:11:59.534380] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.925 [2024-02-13 07:11:59.534547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:25.925 [2024-02-13 07:11:59.534702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.925 [2024-02-13 07:11:59.537668] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.925 [2024-02-13 07:11:59.537869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:11:26.862 07:12:00 -- 
common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:26.862 07:12:00 -- common/autotest_common.sh@850 -- # return 0 00:11:26.862 07:12:00 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@24 -- # local i 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:11:26.862 07:12:00 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:11:26.862 07:12:00 -- common/autotest_common.sh@855 -- # local i 00:11:26.862 07:12:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:26.862 07:12:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:26.862 07:12:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:11:26.862 07:12:00 -- common/autotest_common.sh@859 -- # break 00:11:26.862 07:12:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:26.862 07:12:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:26.862 07:12:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:26.862 1+0 records in 00:11:26.862 1+0 records out 00:11:26.862 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440288 s, 9.3 MB/s 00:11:26.862 07:12:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.862 07:12:00 -- common/autotest_common.sh@872 -- # size=4096 00:11:26.862 07:12:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:26.862 07:12:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:26.862 07:12:00 -- common/autotest_common.sh@875 -- # return 0 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:26.862 07:12:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:11:27.120 07:12:00 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:11:27.120 07:12:00 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:11:27.121 07:12:00 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:11:27.121 07:12:00 -- 
common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:11:27.121 07:12:00 -- common/autotest_common.sh@855 -- # local i 00:11:27.121 07:12:00 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:27.121 07:12:00 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:27.121 07:12:00 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:11:27.121 07:12:00 -- common/autotest_common.sh@859 -- # break 00:11:27.121 07:12:00 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:27.121 07:12:00 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:27.121 07:12:00 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.121 1+0 records in 00:11:27.121 1+0 records out 00:11:27.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434765 s, 9.4 MB/s 00:11:27.121 07:12:00 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.121 07:12:00 -- common/autotest_common.sh@872 -- # size=4096 00:11:27.121 07:12:00 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.121 07:12:00 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:27.121 07:12:00 -- common/autotest_common.sh@875 -- # return 0 00:11:27.121 07:12:00 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:27.121 07:12:00 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:27.121 07:12:00 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:11:27.379 07:12:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:11:27.379 07:12:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:11:27.379 07:12:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:11:27.379 07:12:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:11:27.379 07:12:01 -- common/autotest_common.sh@855 -- # local i 00:11:27.379 07:12:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:27.379 07:12:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:27.379 07:12:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:11:27.379 07:12:01 -- common/autotest_common.sh@859 -- # break 00:11:27.379 07:12:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:27.379 07:12:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:27.379 07:12:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.379 1+0 records in 00:11:27.379 1+0 records out 00:11:27.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467228 s, 8.8 MB/s 00:11:27.379 07:12:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.638 07:12:01 -- common/autotest_common.sh@872 -- # size=4096 00:11:27.638 07:12:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.638 07:12:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:27.638 07:12:01 -- common/autotest_common.sh@875 -- # return 0 00:11:27.638 07:12:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:27.638 07:12:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:27.638 07:12:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:11:27.897 07:12:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:11:27.897 07:12:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 
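By this point the per-device pattern is fully visible in the xtrace: nbd_start_disk exports a bdev as /dev/nbdN over the /var/tmp/spdk-nbd.sock socket, then waitfornbd polls /proc/partitions and requires one successful 4 KiB direct read. Reassembled from the trace into a sketch — not the verbatim helper from autotest_common.sh; the scratch-file path is shortened and the failure path is assumed:

    waitfornbd() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do    # wait for the kernel to publish the device
            grep -q -w "$nbd_name" /proc/partitions && break
        done
        for ((i = 1; i <= 20; i++)); do    # then demand a non-empty O_DIRECT read
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1                           # failure path assumed, not shown in the trace
    }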
00:11:27.897 07:12:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:11:27.897 07:12:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:11:27.897 07:12:01 -- common/autotest_common.sh@855 -- # local i 00:11:27.897 07:12:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:27.897 07:12:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:27.897 07:12:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:11:27.897 07:12:01 -- common/autotest_common.sh@859 -- # break 00:11:27.897 07:12:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:27.897 07:12:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:27.897 07:12:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:27.897 1+0 records in 00:11:27.897 1+0 records out 00:11:27.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582009 s, 7.0 MB/s 00:11:27.897 07:12:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.897 07:12:01 -- common/autotest_common.sh@872 -- # size=4096 00:11:27.897 07:12:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:27.897 07:12:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:27.897 07:12:01 -- common/autotest_common.sh@875 -- # return 0 00:11:27.897 07:12:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:27.897 07:12:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:27.897 07:12:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:11:28.156 07:12:01 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:11:28.156 07:12:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:11:28.156 07:12:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:11:28.156 07:12:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:11:28.156 07:12:01 -- common/autotest_common.sh@855 -- # local i 00:11:28.156 07:12:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:28.156 07:12:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:28.156 07:12:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:11:28.156 07:12:01 -- common/autotest_common.sh@859 -- # break 00:11:28.156 07:12:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:28.156 07:12:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:28.156 07:12:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.156 1+0 records in 00:11:28.156 1+0 records out 00:11:28.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619341 s, 6.6 MB/s 00:11:28.156 07:12:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.156 07:12:01 -- common/autotest_common.sh@872 -- # size=4096 00:11:28.156 07:12:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.156 07:12:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:28.156 07:12:01 -- common/autotest_common.sh@875 -- # return 0 00:11:28.156 07:12:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.156 07:12:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:28.156 07:12:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:11:28.415 07:12:01 -- bdev/nbd_common.sh@28 -- # 
nbd_device=/dev/nbd5 00:11:28.415 07:12:01 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:11:28.415 07:12:01 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:11:28.415 07:12:01 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:11:28.415 07:12:01 -- common/autotest_common.sh@855 -- # local i 00:11:28.415 07:12:01 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:28.415 07:12:01 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:28.415 07:12:01 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:11:28.415 07:12:01 -- common/autotest_common.sh@859 -- # break 00:11:28.415 07:12:01 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:28.415 07:12:01 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:28.415 07:12:01 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.415 1+0 records in 00:11:28.415 1+0 records out 00:11:28.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679486 s, 6.0 MB/s 00:11:28.415 07:12:01 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.415 07:12:01 -- common/autotest_common.sh@872 -- # size=4096 00:11:28.415 07:12:01 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.415 07:12:01 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:28.415 07:12:01 -- common/autotest_common.sh@875 -- # return 0 00:11:28.415 07:12:01 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.415 07:12:01 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:28.415 07:12:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:11:28.674 07:12:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:11:28.674 07:12:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:11:28.674 07:12:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:11:28.674 07:12:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:11:28.674 07:12:02 -- common/autotest_common.sh@855 -- # local i 00:11:28.674 07:12:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:28.674 07:12:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:28.674 07:12:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:11:28.674 07:12:02 -- common/autotest_common.sh@859 -- # break 00:11:28.674 07:12:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:28.674 07:12:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:28.674 07:12:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.674 1+0 records in 00:11:28.674 1+0 records out 00:11:28.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522471 s, 7.8 MB/s 00:11:28.674 07:12:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.674 07:12:02 -- common/autotest_common.sh@872 -- # size=4096 00:11:28.674 07:12:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.674 07:12:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:28.674 07:12:02 -- common/autotest_common.sh@875 -- # return 0 00:11:28.674 07:12:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.674 07:12:02 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:28.674 07:12:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:11:28.933 07:12:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:11:28.933 07:12:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:11:28.933 07:12:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:11:28.933 07:12:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:11:28.933 07:12:02 -- common/autotest_common.sh@855 -- # local i 00:11:28.933 07:12:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:28.933 07:12:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:28.933 07:12:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:11:28.933 07:12:02 -- common/autotest_common.sh@859 -- # break 00:11:28.933 07:12:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:28.933 07:12:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:28.933 07:12:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:28.933 1+0 records in 00:11:28.933 1+0 records out 00:11:28.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00061119 s, 6.7 MB/s 00:11:28.933 07:12:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.933 07:12:02 -- common/autotest_common.sh@872 -- # size=4096 00:11:28.933 07:12:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:28.933 07:12:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:28.933 07:12:02 -- common/autotest_common.sh@875 -- # return 0 00:11:28.933 07:12:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:28.933 07:12:02 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:28.933 07:12:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:11:29.192 07:12:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:11:29.192 07:12:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:11:29.192 07:12:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:11:29.192 07:12:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:11:29.192 07:12:02 -- common/autotest_common.sh@855 -- # local i 00:11:29.192 07:12:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:29.192 07:12:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:29.192 07:12:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:11:29.192 07:12:02 -- common/autotest_common.sh@859 -- # break 00:11:29.192 07:12:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:29.192 07:12:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:29.192 07:12:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.192 1+0 records in 00:11:29.192 1+0 records out 00:11:29.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063149 s, 6.5 MB/s 00:11:29.192 07:12:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.192 07:12:02 -- common/autotest_common.sh@872 -- # size=4096 00:11:29.192 07:12:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.192 07:12:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:29.192 07:12:02 -- common/autotest_common.sh@875 -- # return 0 00:11:29.192 07:12:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:29.192 07:12:02 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:29.192 
07:12:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:11:29.451 07:12:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:11:29.451 07:12:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:11:29.451 07:12:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:11:29.451 07:12:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:11:29.451 07:12:03 -- common/autotest_common.sh@855 -- # local i 00:11:29.451 07:12:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:29.451 07:12:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:29.451 07:12:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:11:29.451 07:12:03 -- common/autotest_common.sh@859 -- # break 00:11:29.451 07:12:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:29.451 07:12:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:29.451 07:12:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.451 1+0 records in 00:11:29.451 1+0 records out 00:11:29.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706934 s, 5.8 MB/s 00:11:29.451 07:12:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.451 07:12:03 -- common/autotest_common.sh@872 -- # size=4096 00:11:29.451 07:12:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.451 07:12:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:29.451 07:12:03 -- common/autotest_common.sh@875 -- # return 0 00:11:29.451 07:12:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:29.451 07:12:03 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:29.451 07:12:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:11:29.710 07:12:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:11:29.710 07:12:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:11:29.710 07:12:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:11:29.710 07:12:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:11:29.710 07:12:03 -- common/autotest_common.sh@855 -- # local i 00:11:29.710 07:12:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:29.710 07:12:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:29.710 07:12:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:11:29.710 07:12:03 -- common/autotest_common.sh@859 -- # break 00:11:29.710 07:12:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:29.710 07:12:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:29.710 07:12:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:29.710 1+0 records in 00:11:29.710 1+0 records out 00:11:29.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769727 s, 5.3 MB/s 00:11:29.710 07:12:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.710 07:12:03 -- common/autotest_common.sh@872 -- # size=4096 00:11:29.710 07:12:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:29.710 07:12:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:29.710 07:12:03 -- common/autotest_common.sh@875 -- # return 0 00:11:29.710 07:12:03 -- bdev/nbd_common.sh@27 
-- # (( i++ )) 00:11:29.710 07:12:03 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:29.710 07:12:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:11:30.279 07:12:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:11:30.279 07:12:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:11:30.279 07:12:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:11:30.279 07:12:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:11:30.279 07:12:03 -- common/autotest_common.sh@855 -- # local i 00:11:30.279 07:12:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:30.279 07:12:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:30.279 07:12:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd11 /proc/partitions 00:11:30.279 07:12:03 -- common/autotest_common.sh@859 -- # break 00:11:30.279 07:12:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:30.279 07:12:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:30.279 07:12:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.279 1+0 records in 00:11:30.279 1+0 records out 00:11:30.279 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000819701 s, 5.0 MB/s 00:11:30.279 07:12:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.279 07:12:03 -- common/autotest_common.sh@872 -- # size=4096 00:11:30.279 07:12:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.279 07:12:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:30.279 07:12:03 -- common/autotest_common.sh@875 -- # return 0 00:11:30.279 07:12:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:30.279 07:12:03 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:30.279 07:12:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:11:30.279 07:12:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:11:30.539 07:12:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:11:30.539 07:12:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:11:30.539 07:12:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:11:30.539 07:12:03 -- common/autotest_common.sh@855 -- # local i 00:11:30.539 07:12:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:30.539 07:12:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:30.539 07:12:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:11:30.539 07:12:03 -- common/autotest_common.sh@859 -- # break 00:11:30.539 07:12:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:30.539 07:12:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:30.539 07:12:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.539 1+0 records in 00:11:30.539 1+0 records out 00:11:30.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801605 s, 5.1 MB/s 00:11:30.539 07:12:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.539 07:12:03 -- common/autotest_common.sh@872 -- # size=4096 00:11:30.539 07:12:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.539 07:12:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:30.539 07:12:03 
-- common/autotest_common.sh@875 -- # return 0 00:11:30.539 07:12:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:30.539 07:12:03 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:30.539 07:12:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:11:30.539 07:12:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:11:30.539 07:12:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:11:30.539 07:12:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:11:30.539 07:12:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:11:30.539 07:12:04 -- common/autotest_common.sh@855 -- # local i 00:11:30.539 07:12:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:30.539 07:12:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:30.539 07:12:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:11:30.539 07:12:04 -- common/autotest_common.sh@859 -- # break 00:11:30.539 07:12:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:30.539 07:12:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:30.539 07:12:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:30.798 1+0 records in 00:11:30.798 1+0 records out 00:11:30.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000764438 s, 5.4 MB/s 00:11:30.798 07:12:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.798 07:12:04 -- common/autotest_common.sh@872 -- # size=4096 00:11:30.798 07:12:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:30.798 07:12:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:30.798 07:12:04 -- common/autotest_common.sh@875 -- # return 0 00:11:30.798 07:12:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:30.798 07:12:04 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:30.798 07:12:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:11:31.058 07:12:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:11:31.058 07:12:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:11:31.058 07:12:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:11:31.058 07:12:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:11:31.058 07:12:04 -- common/autotest_common.sh@855 -- # local i 00:11:31.058 07:12:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:31.058 07:12:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:31.058 07:12:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:11:31.058 07:12:04 -- common/autotest_common.sh@859 -- # break 00:11:31.058 07:12:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:31.058 07:12:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:31.058 07:12:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.058 1+0 records in 00:11:31.058 1+0 records out 00:11:31.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874409 s, 4.7 MB/s 00:11:31.058 07:12:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.058 07:12:04 -- common/autotest_common.sh@872 -- # size=4096 00:11:31.058 07:12:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
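Each nbd_start_disk call above is gated by the same waitfornbd check before the test moves on to the next bdev. Reconstructed from the @854-@875 trace lines, the pattern has two stages: poll /proc/partitions until the kernel publishes the device name, then prove the data path works with a single 4 KiB O_DIRECT read. A minimal sketch of that pattern (function and temp-file names are illustrative, and the retry sleep in the first loop is an assumption, since every probe above succeeds on its first attempt):

waitfornbd_sketch() {
    local nbd_name=$1 i size
    # Stage 1: wait up to 20 probes for the device to show up in /proc/partitions.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed back-off; the trace above never needs a retry here
    done
    ((i <= 20)) || return 1
    # Stage 2: read one 4096-byte block with O_DIRECT and require a non-empty result.
    for ((i = 1; i <= 20; i++)); do
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        fi
        sleep 0.1
    done
    return 1
}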
00:11:31.058 07:12:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:31.058 07:12:04 -- common/autotest_common.sh@875 -- # return 0 00:11:31.058 07:12:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:31.058 07:12:04 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:31.058 07:12:04 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:11:31.317 07:12:04 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:11:31.317 07:12:04 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:11:31.317 07:12:04 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:11:31.317 07:12:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:11:31.317 07:12:04 -- common/autotest_common.sh@855 -- # local i 00:11:31.317 07:12:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:31.317 07:12:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:31.317 07:12:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:11:31.317 07:12:04 -- common/autotest_common.sh@859 -- # break 00:11:31.317 07:12:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:31.317 07:12:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:31.317 07:12:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.317 1+0 records in 00:11:31.317 1+0 records out 00:11:31.317 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125819 s, 3.3 MB/s 00:11:31.317 07:12:04 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.317 07:12:04 -- common/autotest_common.sh@872 -- # size=4096 00:11:31.317 07:12:04 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.317 07:12:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:31.317 07:12:04 -- common/autotest_common.sh@875 -- # return 0 00:11:31.317 07:12:04 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:11:31.317 07:12:04 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:11:31.317 07:12:04 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd0", 00:11:31.577 "bdev_name": "Malloc0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd1", 00:11:31.577 "bdev_name": "Malloc1p0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd2", 00:11:31.577 "bdev_name": "Malloc1p1" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd3", 00:11:31.577 "bdev_name": "Malloc2p0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd4", 00:11:31.577 "bdev_name": "Malloc2p1" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd5", 00:11:31.577 "bdev_name": "Malloc2p2" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd6", 00:11:31.577 "bdev_name": "Malloc2p3" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd7", 00:11:31.577 "bdev_name": "Malloc2p4" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd8", 00:11:31.577 "bdev_name": "Malloc2p5" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd9", 00:11:31.577 "bdev_name": "Malloc2p6" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd10", 00:11:31.577 "bdev_name": "Malloc2p7" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": 
"/dev/nbd11", 00:11:31.577 "bdev_name": "TestPT" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd12", 00:11:31.577 "bdev_name": "raid0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd13", 00:11:31.577 "bdev_name": "concat0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd14", 00:11:31.577 "bdev_name": "raid1" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd15", 00:11:31.577 "bdev_name": "AIO0" 00:11:31.577 } 00:11:31.577 ]' 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@119 -- # echo '[ 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd0", 00:11:31.577 "bdev_name": "Malloc0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd1", 00:11:31.577 "bdev_name": "Malloc1p0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd2", 00:11:31.577 "bdev_name": "Malloc1p1" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd3", 00:11:31.577 "bdev_name": "Malloc2p0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd4", 00:11:31.577 "bdev_name": "Malloc2p1" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd5", 00:11:31.577 "bdev_name": "Malloc2p2" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd6", 00:11:31.577 "bdev_name": "Malloc2p3" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd7", 00:11:31.577 "bdev_name": "Malloc2p4" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd8", 00:11:31.577 "bdev_name": "Malloc2p5" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd9", 00:11:31.577 "bdev_name": "Malloc2p6" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd10", 00:11:31.577 "bdev_name": "Malloc2p7" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd11", 00:11:31.577 "bdev_name": "TestPT" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd12", 00:11:31.577 "bdev_name": "raid0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd13", 00:11:31.577 "bdev_name": "concat0" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd14", 00:11:31.577 "bdev_name": "raid1" 00:11:31.577 }, 00:11:31.577 { 00:11:31.577 "nbd_device": "/dev/nbd15", 00:11:31.577 "bdev_name": "AIO0" 00:11:31.577 } 00:11:31.577 ]' 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@51 -- # local i 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.577 07:12:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@37 -- # (( 
i = 1 )) 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@41 -- # break 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:31.836 07:12:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@41 -- # break 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.095 07:12:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@41 -- # break 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.354 07:12:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@41 -- # break 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.613 07:12:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@41 -- # break 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@45 -- # return 0 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:32.872 07:12:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@41 -- # break 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.132 07:12:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@41 -- # break 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.391 07:12:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@41 -- # break 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.649 07:12:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@41 -- # break 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@45 -- # return 0 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:33.908 07:12:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd9 /proc/partitions 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@41 -- # break 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@41 -- # break 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.168 07:12:07 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@41 -- # break 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.469 07:12:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:34.733 07:12:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:34.733 07:12:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:34.733 07:12:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:34.733 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.733 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.734 07:12:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:34.734 07:12:08 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@41 -- # break 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@45 -- # return 0 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:34.996 07:12:08 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:35.260 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:35.260 
07:12:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.260 07:12:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:35.260 07:12:08 -- bdev/nbd_common.sh@41 -- # break 00:11:35.260 07:12:08 -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.260 07:12:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.260 07:12:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@41 -- # break 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:35.518 07:12:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:35.518 07:12:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:35.518 07:12:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:35.518 07:12:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:35.518 07:12:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:35.519 07:12:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:35.519 07:12:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:35.519 07:12:09 -- bdev/nbd_common.sh@41 -- # break 00:11:35.519 07:12:09 -- bdev/nbd_common.sh@45 -- # return 0 00:11:35.519 07:12:09 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:35.519 07:12:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:35.519 07:12:09 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@65 -- # true 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@65 -- # count=0 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@122 -- # count=0 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@127 -- # return 0 00:11:36.087 07:12:09 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:11:36.087 
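The nbd_stop_disk sequence traced above is the mirror image of the start-up gate: issue the RPC, then poll /proc/partitions until the name disappears, sleeping 0.1 s between probes (nbd12 and nbd13 each needed one such retry above). A sketch under the same naming assumptions:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock

nbd_stop_and_wait() {
    local dev=$1 nbd_name i
    nbd_name=$(basename "$dev")
    "$rpc" -s "$sock" nbd_stop_disk "$dev"
    for ((i = 1; i <= 20; i++)); do
        # Teardown succeeds when the name is gone, so break when grep fails.
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1
    done
    ((i <= 20))   # non-zero exit status if the device never went away
}

The nbd_get_disks check that follows confirms nothing is left exported. Note that grep -c prints 0 but exits non-zero when there are no matches, which is why a bare true appears in the trace right after it:

count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
[ "$count" -ne 0 ] && exit 1   # illustrative failure branch; the trace takes the happy path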
07:12:09 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@12 -- # local i 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.087 07:12:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:11:36.347 /dev/nbd0 00:11:36.347 07:12:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:36.347 07:12:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:36.347 07:12:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:11:36.347 07:12:09 -- common/autotest_common.sh@855 -- # local i 00:11:36.347 07:12:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:36.347 07:12:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:36.347 07:12:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:11:36.347 07:12:09 -- common/autotest_common.sh@859 -- # break 00:11:36.347 07:12:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:36.347 07:12:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:36.347 07:12:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.347 1+0 records in 00:11:36.347 1+0 records out 00:11:36.347 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532105 s, 7.7 MB/s 00:11:36.347 07:12:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.347 07:12:09 -- common/autotest_common.sh@872 -- # size=4096 00:11:36.347 07:12:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.347 07:12:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:36.347 07:12:09 -- common/autotest_common.sh@875 -- # return 0 00:11:36.347 07:12:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.347 07:12:09 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.347 07:12:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:11:36.606 /dev/nbd1 00:11:36.606 07:12:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:36.606 07:12:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:36.606 07:12:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:11:36.606 07:12:10 -- common/autotest_common.sh@855 -- # local i 00:11:36.606 07:12:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:36.606 07:12:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:36.606 07:12:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 
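Two RPC forms are in play here. The earlier block let SPDK pick the node (nbd_start_disk Malloc2p6 came back as /dev/nbd9), while this restart pins each bdev to an explicit node; in both cases the RPC prints the claimed device path on success:

# SPDK allocates the next free node and reports it (e.g. /dev/nbd9):
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
# Caller supplies the node; the RPC echoes it back (e.g. /dev/nbd0):
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0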
00:11:36.606 07:12:10 -- common/autotest_common.sh@859 -- # break 00:11:36.606 07:12:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:36.606 07:12:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:36.606 07:12:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.606 1+0 records in 00:11:36.606 1+0 records out 00:11:36.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043134 s, 9.5 MB/s 00:11:36.606 07:12:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.606 07:12:10 -- common/autotest_common.sh@872 -- # size=4096 00:11:36.606 07:12:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.606 07:12:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:36.606 07:12:10 -- common/autotest_common.sh@875 -- # return 0 00:11:36.606 07:12:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.606 07:12:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.606 07:12:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:11:36.866 /dev/nbd10 00:11:36.866 07:12:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:11:36.866 07:12:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:11:36.866 07:12:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd10 00:11:36.866 07:12:10 -- common/autotest_common.sh@855 -- # local i 00:11:36.866 07:12:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:36.866 07:12:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:36.866 07:12:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd10 /proc/partitions 00:11:36.866 07:12:10 -- common/autotest_common.sh@859 -- # break 00:11:36.866 07:12:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:36.866 07:12:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:36.866 07:12:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:36.866 1+0 records in 00:11:36.866 1+0 records out 00:11:36.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327068 s, 12.5 MB/s 00:11:36.866 07:12:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.866 07:12:10 -- common/autotest_common.sh@872 -- # size=4096 00:11:36.866 07:12:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:36.866 07:12:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:36.866 07:12:10 -- common/autotest_common.sh@875 -- # return 0 00:11:36.866 07:12:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:36.866 07:12:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:36.866 07:12:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:11:37.125 /dev/nbd11 00:11:37.125 07:12:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:11:37.125 07:12:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:11:37.125 07:12:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd11 00:11:37.125 07:12:10 -- common/autotest_common.sh@855 -- # local i 00:11:37.125 07:12:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:37.125 07:12:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:37.125 07:12:10 -- common/autotest_common.sh@858 -- # grep -q -w 
nbd11 /proc/partitions 00:11:37.125 07:12:10 -- common/autotest_common.sh@859 -- # break 00:11:37.125 07:12:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:37.125 07:12:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:37.125 07:12:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.125 1+0 records in 00:11:37.125 1+0 records out 00:11:37.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405466 s, 10.1 MB/s 00:11:37.125 07:12:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.125 07:12:10 -- common/autotest_common.sh@872 -- # size=4096 00:11:37.125 07:12:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.125 07:12:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:37.125 07:12:10 -- common/autotest_common.sh@875 -- # return 0 00:11:37.125 07:12:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.125 07:12:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:37.125 07:12:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:11:37.384 /dev/nbd12 00:11:37.384 07:12:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:11:37.384 07:12:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:11:37.384 07:12:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd12 00:11:37.384 07:12:10 -- common/autotest_common.sh@855 -- # local i 00:11:37.384 07:12:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:37.384 07:12:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:37.384 07:12:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd12 /proc/partitions 00:11:37.384 07:12:10 -- common/autotest_common.sh@859 -- # break 00:11:37.384 07:12:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:37.384 07:12:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:37.384 07:12:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.384 1+0 records in 00:11:37.384 1+0 records out 00:11:37.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038196 s, 10.7 MB/s 00:11:37.384 07:12:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.384 07:12:10 -- common/autotest_common.sh@872 -- # size=4096 00:11:37.384 07:12:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.384 07:12:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:37.384 07:12:10 -- common/autotest_common.sh@875 -- # return 0 00:11:37.384 07:12:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.384 07:12:10 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:37.384 07:12:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:11:37.644 /dev/nbd13 00:11:37.644 07:12:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:11:37.644 07:12:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:11:37.644 07:12:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd13 00:11:37.644 07:12:11 -- common/autotest_common.sh@855 -- # local i 00:11:37.644 07:12:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:37.644 07:12:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:37.644 07:12:11 -- 
common/autotest_common.sh@858 -- # grep -q -w nbd13 /proc/partitions 00:11:37.644 07:12:11 -- common/autotest_common.sh@859 -- # break 00:11:37.644 07:12:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:37.644 07:12:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:37.644 07:12:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.644 1+0 records in 00:11:37.644 1+0 records out 00:11:37.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042495 s, 9.6 MB/s 00:11:37.644 07:12:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.644 07:12:11 -- common/autotest_common.sh@872 -- # size=4096 00:11:37.644 07:12:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.644 07:12:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:37.644 07:12:11 -- common/autotest_common.sh@875 -- # return 0 00:11:37.644 07:12:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.644 07:12:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:37.644 07:12:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:11:37.644 /dev/nbd14 00:11:37.644 07:12:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:11:37.904 07:12:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:11:37.904 07:12:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd14 00:11:37.904 07:12:11 -- common/autotest_common.sh@855 -- # local i 00:11:37.904 07:12:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:37.904 07:12:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:37.904 07:12:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd14 /proc/partitions 00:11:37.904 07:12:11 -- common/autotest_common.sh@859 -- # break 00:11:37.904 07:12:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:37.904 07:12:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:37.904 07:12:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:37.904 1+0 records in 00:11:37.904 1+0 records out 00:11:37.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000700103 s, 5.9 MB/s 00:11:37.904 07:12:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.904 07:12:11 -- common/autotest_common.sh@872 -- # size=4096 00:11:37.904 07:12:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:37.904 07:12:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:37.904 07:12:11 -- common/autotest_common.sh@875 -- # return 0 00:11:37.904 07:12:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:37.904 07:12:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:37.904 07:12:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:11:37.904 /dev/nbd15 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:11:38.163 07:12:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd15 00:11:38.163 07:12:11 -- common/autotest_common.sh@855 -- # local i 00:11:38.163 07:12:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:38.163 07:12:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 
00:11:38.163 07:12:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd15 /proc/partitions 00:11:38.163 07:12:11 -- common/autotest_common.sh@859 -- # break 00:11:38.163 07:12:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:38.163 07:12:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:38.163 07:12:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.163 1+0 records in 00:11:38.163 1+0 records out 00:11:38.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000435744 s, 9.4 MB/s 00:11:38.163 07:12:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.163 07:12:11 -- common/autotest_common.sh@872 -- # size=4096 00:11:38.163 07:12:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.163 07:12:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:38.163 07:12:11 -- common/autotest_common.sh@875 -- # return 0 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:11:38.163 /dev/nbd2 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:11:38.163 07:12:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd2 00:11:38.163 07:12:11 -- common/autotest_common.sh@855 -- # local i 00:11:38.163 07:12:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:38.163 07:12:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:38.163 07:12:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd2 /proc/partitions 00:11:38.163 07:12:11 -- common/autotest_common.sh@859 -- # break 00:11:38.163 07:12:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:38.163 07:12:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:38.163 07:12:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.163 1+0 records in 00:11:38.163 1+0 records out 00:11:38.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0004601 s, 8.9 MB/s 00:11:38.163 07:12:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.163 07:12:11 -- common/autotest_common.sh@872 -- # size=4096 00:11:38.163 07:12:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.163 07:12:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:38.163 07:12:11 -- common/autotest_common.sh@875 -- # return 0 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:38.163 07:12:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:11:38.422 /dev/nbd3 00:11:38.422 07:12:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:11:38.422 07:12:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:11:38.422 07:12:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd3 00:11:38.422 07:12:12 -- common/autotest_common.sh@855 -- # local i 00:11:38.422 07:12:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:38.422 07:12:12 -- common/autotest_common.sh@857 -- # (( i 
<= 20 )) 00:11:38.422 07:12:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd3 /proc/partitions 00:11:38.422 07:12:12 -- common/autotest_common.sh@859 -- # break 00:11:38.422 07:12:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:38.422 07:12:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:38.422 07:12:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.422 1+0 records in 00:11:38.422 1+0 records out 00:11:38.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405113 s, 10.1 MB/s 00:11:38.422 07:12:12 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.422 07:12:12 -- common/autotest_common.sh@872 -- # size=4096 00:11:38.422 07:12:12 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.422 07:12:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:38.422 07:12:12 -- common/autotest_common.sh@875 -- # return 0 00:11:38.422 07:12:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.422 07:12:12 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:38.422 07:12:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:11:38.682 /dev/nbd4 00:11:38.682 07:12:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:11:38.682 07:12:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:11:38.682 07:12:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd4 00:11:38.682 07:12:12 -- common/autotest_common.sh@855 -- # local i 00:11:38.682 07:12:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:38.682 07:12:12 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:38.682 07:12:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd4 /proc/partitions 00:11:38.682 07:12:12 -- common/autotest_common.sh@859 -- # break 00:11:38.682 07:12:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:38.682 07:12:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:38.682 07:12:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.682 1+0 records in 00:11:38.682 1+0 records out 00:11:38.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000483147 s, 8.5 MB/s 00:11:38.682 07:12:12 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.682 07:12:12 -- common/autotest_common.sh@872 -- # size=4096 00:11:38.682 07:12:12 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.682 07:12:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:38.682 07:12:12 -- common/autotest_common.sh@875 -- # return 0 00:11:38.682 07:12:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.682 07:12:12 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:38.682 07:12:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:11:38.942 /dev/nbd5 00:11:38.942 07:12:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:11:38.942 07:12:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:11:38.942 07:12:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd5 00:11:38.942 07:12:12 -- common/autotest_common.sh@855 -- # local i 00:11:38.942 07:12:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:38.942 07:12:12 -- common/autotest_common.sh@857 -- # 
(( i <= 20 )) 00:11:38.942 07:12:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd5 /proc/partitions 00:11:38.942 07:12:12 -- common/autotest_common.sh@859 -- # break 00:11:38.942 07:12:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:38.942 07:12:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:38.942 07:12:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:38.942 1+0 records in 00:11:38.942 1+0 records out 00:11:38.942 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437573 s, 9.4 MB/s 00:11:38.942 07:12:12 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.942 07:12:12 -- common/autotest_common.sh@872 -- # size=4096 00:11:38.942 07:12:12 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:38.942 07:12:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:38.942 07:12:12 -- common/autotest_common.sh@875 -- # return 0 00:11:38.942 07:12:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:38.942 07:12:12 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:38.942 07:12:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6 00:11:39.201 /dev/nbd6 00:11:39.201 07:12:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:11:39.201 07:12:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:11:39.201 07:12:12 -- common/autotest_common.sh@854 -- # local nbd_name=nbd6 00:11:39.201 07:12:12 -- common/autotest_common.sh@855 -- # local i 00:11:39.201 07:12:12 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:39.201 07:12:12 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:39.201 07:12:12 -- common/autotest_common.sh@858 -- # grep -q -w nbd6 /proc/partitions 00:11:39.201 07:12:12 -- common/autotest_common.sh@859 -- # break 00:11:39.201 07:12:12 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:39.201 07:12:12 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:39.201 07:12:12 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.201 1+0 records in 00:11:39.201 1+0 records out 00:11:39.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125524 s, 3.3 MB/s 00:11:39.201 07:12:12 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.201 07:12:12 -- common/autotest_common.sh@872 -- # size=4096 00:11:39.201 07:12:12 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.201 07:12:12 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:39.201 07:12:12 -- common/autotest_common.sh@875 -- # return 0 00:11:39.201 07:12:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.201 07:12:12 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:39.201 07:12:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:11:39.461 /dev/nbd7 00:11:39.461 07:12:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:11:39.461 07:12:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:11:39.461 07:12:13 -- common/autotest_common.sh@854 -- # local nbd_name=nbd7 00:11:39.461 07:12:13 -- common/autotest_common.sh@855 -- # local i 00:11:39.461 07:12:13 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:39.461 07:12:13 -- common/autotest_common.sh@857 -- # 
(( i <= 20 )) 00:11:39.461 07:12:13 -- common/autotest_common.sh@858 -- # grep -q -w nbd7 /proc/partitions 00:11:39.461 07:12:13 -- common/autotest_common.sh@859 -- # break 00:11:39.461 07:12:13 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:39.461 07:12:13 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:39.461 07:12:13 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.461 1+0 records in 00:11:39.461 1+0 records out 00:11:39.461 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000776481 s, 5.3 MB/s 00:11:39.461 07:12:13 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.461 07:12:13 -- common/autotest_common.sh@872 -- # size=4096 00:11:39.461 07:12:13 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.461 07:12:13 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:39.461 07:12:13 -- common/autotest_common.sh@875 -- # return 0 00:11:39.461 07:12:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.461 07:12:13 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:39.461 07:12:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:11:39.720 /dev/nbd8 00:11:39.720 07:12:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:11:39.720 07:12:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:11:39.720 07:12:13 -- common/autotest_common.sh@854 -- # local nbd_name=nbd8 00:11:39.720 07:12:13 -- common/autotest_common.sh@855 -- # local i 00:11:39.720 07:12:13 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:39.720 07:12:13 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:11:39.720 07:12:13 -- common/autotest_common.sh@858 -- # grep -q -w nbd8 /proc/partitions 00:11:39.720 07:12:13 -- common/autotest_common.sh@859 -- # break 00:11:39.720 07:12:13 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:39.720 07:12:13 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:39.720 07:12:13 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.720 1+0 records in 00:11:39.720 1+0 records out 00:11:39.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000687241 s, 6.0 MB/s 00:11:39.720 07:12:13 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.720 07:12:13 -- common/autotest_common.sh@872 -- # size=4096 00:11:39.720 07:12:13 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.720 07:12:13 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:39.720 07:12:13 -- common/autotest_common.sh@875 -- # return 0 00:11:39.720 07:12:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.720 07:12:13 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:39.720 07:12:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:11:39.980 /dev/nbd9 00:11:39.980 07:12:13 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:11:39.980 07:12:13 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:11:39.980 07:12:13 -- common/autotest_common.sh@854 -- # local nbd_name=nbd9 00:11:39.980 07:12:13 -- common/autotest_common.sh@855 -- # local i 00:11:39.980 07:12:13 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:11:39.980 07:12:13 -- common/autotest_common.sh@857 -- # 
(( i <= 20 )) 00:11:39.980 07:12:13 -- common/autotest_common.sh@858 -- # grep -q -w nbd9 /proc/partitions 00:11:39.980 07:12:13 -- common/autotest_common.sh@859 -- # break 00:11:39.980 07:12:13 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:11:39.980 07:12:13 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:11:39.980 07:12:13 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:39.980 1+0 records in 00:11:39.980 1+0 records out 00:11:39.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000954489 s, 4.3 MB/s 00:11:39.980 07:12:13 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.980 07:12:13 -- common/autotest_common.sh@872 -- # size=4096 00:11:39.980 07:12:13 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:39.980 07:12:13 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:11:39.980 07:12:13 -- common/autotest_common.sh@875 -- # return 0 00:11:39.980 07:12:13 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:39.980 07:12:13 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:11:39.980 07:12:13 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:11:39.980 07:12:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:39.980 07:12:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd0", 00:11:40.240 "bdev_name": "Malloc0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd1", 00:11:40.240 "bdev_name": "Malloc1p0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd10", 00:11:40.240 "bdev_name": "Malloc1p1" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd11", 00:11:40.240 "bdev_name": "Malloc2p0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd12", 00:11:40.240 "bdev_name": "Malloc2p1" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd13", 00:11:40.240 "bdev_name": "Malloc2p2" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd14", 00:11:40.240 "bdev_name": "Malloc2p3" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd15", 00:11:40.240 "bdev_name": "Malloc2p4" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd2", 00:11:40.240 "bdev_name": "Malloc2p5" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd3", 00:11:40.240 "bdev_name": "Malloc2p6" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd4", 00:11:40.240 "bdev_name": "Malloc2p7" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd5", 00:11:40.240 "bdev_name": "TestPT" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd6", 00:11:40.240 "bdev_name": "raid0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd7", 00:11:40.240 "bdev_name": "concat0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd8", 00:11:40.240 "bdev_name": "raid1" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd9", 00:11:40.240 "bdev_name": "AIO0" 00:11:40.240 } 00:11:40.240 ]' 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@64 -- # echo '[ 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd0", 00:11:40.240 "bdev_name": "Malloc0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd1", 00:11:40.240 
"bdev_name": "Malloc1p0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd10", 00:11:40.240 "bdev_name": "Malloc1p1" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd11", 00:11:40.240 "bdev_name": "Malloc2p0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd12", 00:11:40.240 "bdev_name": "Malloc2p1" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd13", 00:11:40.240 "bdev_name": "Malloc2p2" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd14", 00:11:40.240 "bdev_name": "Malloc2p3" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd15", 00:11:40.240 "bdev_name": "Malloc2p4" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd2", 00:11:40.240 "bdev_name": "Malloc2p5" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd3", 00:11:40.240 "bdev_name": "Malloc2p6" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd4", 00:11:40.240 "bdev_name": "Malloc2p7" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd5", 00:11:40.240 "bdev_name": "TestPT" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd6", 00:11:40.240 "bdev_name": "raid0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd7", 00:11:40.240 "bdev_name": "concat0" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd8", 00:11:40.240 "bdev_name": "raid1" 00:11:40.240 }, 00:11:40.240 { 00:11:40.240 "nbd_device": "/dev/nbd9", 00:11:40.240 "bdev_name": "AIO0" 00:11:40.240 } 00:11:40.240 ]' 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:11:40.240 /dev/nbd1 00:11:40.240 /dev/nbd10 00:11:40.240 /dev/nbd11 00:11:40.240 /dev/nbd12 00:11:40.240 /dev/nbd13 00:11:40.240 /dev/nbd14 00:11:40.240 /dev/nbd15 00:11:40.240 /dev/nbd2 00:11:40.240 /dev/nbd3 00:11:40.240 /dev/nbd4 00:11:40.240 /dev/nbd5 00:11:40.240 /dev/nbd6 00:11:40.240 /dev/nbd7 00:11:40.240 /dev/nbd8 00:11:40.240 /dev/nbd9' 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:11:40.240 /dev/nbd1 00:11:40.240 /dev/nbd10 00:11:40.240 /dev/nbd11 00:11:40.240 /dev/nbd12 00:11:40.240 /dev/nbd13 00:11:40.240 /dev/nbd14 00:11:40.240 /dev/nbd15 00:11:40.240 /dev/nbd2 00:11:40.240 /dev/nbd3 00:11:40.240 /dev/nbd4 00:11:40.240 /dev/nbd5 00:11:40.240 /dev/nbd6 00:11:40.240 /dev/nbd7 00:11:40.240 /dev/nbd8 00:11:40.240 /dev/nbd9' 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@65 -- # count=16 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@66 -- # echo 16 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@95 -- # count=16 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@71 -- # local operation=write 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:11:40.240 256+0 records in 00:11:40.240 256+0 records out 00:11:40.240 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00794395 s, 132 MB/s 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.240 07:12:13 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:11:40.500 256+0 records in 00:11:40.500 256+0 records out 00:11:40.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.143597 s, 7.3 MB/s 00:11:40.500 07:12:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.500 07:12:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:11:40.500 256+0 records in 00:11:40.500 256+0 records out 00:11:40.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.131801 s, 8.0 MB/s 00:11:40.500 07:12:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.500 07:12:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:11:40.759 256+0 records in 00:11:40.759 256+0 records out 00:11:40.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124999 s, 8.4 MB/s 00:11:40.759 07:12:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.759 07:12:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:11:40.759 256+0 records in 00:11:40.759 256+0 records out 00:11:40.759 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137091 s, 7.6 MB/s 00:11:40.759 07:12:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:40.759 07:12:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:11:41.018 256+0 records in 00:11:41.018 256+0 records out 00:11:41.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130116 s, 8.1 MB/s 00:11:41.018 07:12:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.018 07:12:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:11:41.018 256+0 records in 00:11:41.018 256+0 records out 00:11:41.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125344 s, 8.4 MB/s 00:11:41.018 07:12:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.018 07:12:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:11:41.277 256+0 records in 00:11:41.277 256+0 records out 00:11:41.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129782 s, 8.1 MB/s 00:11:41.277 07:12:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.277 07:12:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:11:41.277 256+0 records in 00:11:41.277 256+0 records out 00:11:41.277 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130965 s, 8.0 MB/s 00:11:41.277 07:12:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.277 07:12:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:11:41.536 256+0 records in 00:11:41.536 256+0 records out 00:11:41.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135268 s, 7.8 MB/s 00:11:41.536 07:12:15 -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.536 07:12:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:11:41.536 256+0 records in 00:11:41.536 256+0 records out 00:11:41.536 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139269 s, 7.5 MB/s 00:11:41.536 07:12:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.536 07:12:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:11:41.795 256+0 records in 00:11:41.795 256+0 records out 00:11:41.795 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130491 s, 8.0 MB/s 00:11:41.795 07:12:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:41.795 07:12:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:11:42.054 256+0 records in 00:11:42.054 256+0 records out 00:11:42.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139353 s, 7.5 MB/s 00:11:42.054 07:12:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:42.054 07:12:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:11:42.054 256+0 records in 00:11:42.054 256+0 records out 00:11:42.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125597 s, 8.3 MB/s 00:11:42.054 07:12:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:42.054 07:12:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:11:42.312 256+0 records in 00:11:42.312 256+0 records out 00:11:42.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144323 s, 7.3 MB/s 00:11:42.312 07:12:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:42.312 07:12:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:11:42.312 256+0 records in 00:11:42.312 256+0 records out 00:11:42.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13567 s, 7.7 MB/s 00:11:42.312 07:12:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:11:42.312 07:12:15 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:11:42.571 256+0 records in 00:11:42.571 256+0 records out 00:11:42.571 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.203204 s, 5.2 MB/s 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 
07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@51 -- # local i 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:42.571 07:12:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@41 -- # break 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:43.138 07:12:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:43.139 07:12:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.139 07:12:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.139 07:12:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:43.139 07:12:16 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:43.397 07:12:16 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:43.397 07:12:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.397 07:12:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:43.397 07:12:16 -- bdev/nbd_common.sh@41 -- # break 00:11:43.397 07:12:16 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.397 07:12:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.397 07:12:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@41 -- # break 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.657 07:12:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:43.916 07:12:17 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@41 -- # break 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.916 07:12:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:11:44.174 07:12:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:11:44.174 07:12:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:11:44.174 07:12:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:11:44.174 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.174 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.174 07:12:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:44.174 07:12:17 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:44.433 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:44.433 07:12:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.433 07:12:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:11:44.433 07:12:17 -- bdev/nbd_common.sh@41 -- # break 00:11:44.433 07:12:17 -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.433 07:12:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.433 07:12:17 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@41 -- # break 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.692 07:12:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@41 -- # break 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.951 07:12:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:45.210 07:12:18 -- 
bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@41 -- # break 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.210 07:12:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.469 07:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@41 -- # break 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@41 -- # break 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.729 07:12:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:11:45.987 07:12:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@41 -- # break 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@45 -- # return 0 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:45.988 07:12:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:11:46.247 07:12:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:11:46.247 07:12:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 
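[Editor's note] Each teardown traced above follows the same shape: nbd_stop_disk over the RPC socket, then a bounded poll of /proc/partitions until the kernel drops the device, sleeping 0.1s between retries for at most 20 attempts. A minimal sketch of that loop, assuming the rpc.py path and socket from this run; wait_for_nbd_exit is an illustrative stand-in for the suite's own waitfornbd_exit helper, not its exact code.

    wait_for_nbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            # done once the name no longer appears in /proc/partitions
            grep -q -w "$nbd_name" /proc/partitions || return 0
            sleep 0.1
        done
        return 1   # still present after ~2s of polling
    }

    for dev in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
            nbd_stop_disk "$dev"
        wait_for_nbd_exit "$(basename "$dev")"
    done

Polling /proc/partitions rather than stat-ing the device node is deliberate: the node can linger briefly after the nbd connection is torn down, while the partitions table reflects the kernel's view directly.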
00:11:46.247 07:12:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:11:46.247 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.247 07:12:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.247 07:12:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:11:46.247 07:12:19 -- bdev/nbd_common.sh@41 -- # break 00:11:46.247 07:12:19 -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.247 07:12:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.247 07:12:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@41 -- # break 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.506 07:12:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@41 -- # break 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.766 07:12:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@41 -- # break 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:47.025 07:12:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@41 -- # break 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@45 -- # return 0 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@104 -- # 
nbd_get_count /var/tmp/spdk-nbd.sock 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:47.283 07:12:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@65 -- # true 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@65 -- # count=0 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@104 -- # count=0 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@109 -- # return 0 00:11:47.542 07:12:21 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:11:47.542 07:12:21 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:11:47.543 07:12:21 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:11:47.543 07:12:21 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:11:47.802 malloc_lvol_verify 00:11:47.802 07:12:21 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:11:48.061 5be643b0-7109-4dcc-9c15-3185fb39281f 00:11:48.061 07:12:21 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:11:48.321 729aaef8-f1a0-4e96-8a9c-03d75a44d5ac 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:11:48.321 /dev/nbd0 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:11:48.321 mke2fs 1.45.5 (07-Jan-2020) 00:11:48.321 00:11:48.321 Filesystem too small for a journal 00:11:48.321 Creating filesystem with 1024 4k blocks and 1024 inodes 00:11:48.321 00:11:48.321 Allocating group tables: 0/1 done 00:11:48.321 Writing inode tables: 0/1 done 00:11:48.321 Writing superblocks and filesystem accounting information: 0/1 done 00:11:48.321 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@51 -- # local i 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:48.321 07:12:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:11:48.580 07:12:22 -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:48.580 07:12:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:48.580 07:12:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:48.580 07:12:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:48.580 07:12:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.580 07:12:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:48.580 07:12:22 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:11:48.839 07:12:22 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:11:48.839 07:12:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:48.839 07:12:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:48.839 07:12:22 -- bdev/nbd_common.sh@41 -- # break 00:11:48.839 07:12:22 -- bdev/nbd_common.sh@45 -- # return 0 00:11:48.839 07:12:22 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:11:48.839 07:12:22 -- bdev/nbd_common.sh@147 -- # return 0 00:11:48.839 07:12:22 -- bdev/blockdev.sh@324 -- # killprocess 112639 00:11:48.839 07:12:22 -- common/autotest_common.sh@924 -- # '[' -z 112639 ']' 00:11:48.839 07:12:22 -- common/autotest_common.sh@928 -- # kill -0 112639 00:11:48.839 07:12:22 -- common/autotest_common.sh@929 -- # uname 00:11:48.839 07:12:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:11:48.839 07:12:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 112639 00:11:48.839 killing process with pid 112639 00:11:48.839 07:12:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:11:48.839 07:12:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:11:48.839 07:12:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 112639' 00:11:48.839 07:12:22 -- common/autotest_common.sh@943 -- # kill 112639 00:11:48.839 07:12:22 -- common/autotest_common.sh@948 -- # wait 112639 00:11:48.839 [2024-02-13 07:12:22.322227] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:11:50.745 ************************************ 00:11:50.745 END TEST bdev_nbd 00:11:50.745 ************************************ 00:11:50.745 07:12:24 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:11:50.745 00:11:50.745 real 0m25.551s 00:11:50.745 user 0m34.628s 00:11:50.745 sys 0m8.738s 00:11:50.745 07:12:24 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:50.745 07:12:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 07:12:24 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:11:50.745 07:12:24 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:11:50.745 07:12:24 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:11:50.745 07:12:24 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:11:50.745 07:12:24 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:11:50.745 07:12:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:50.745 07:12:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 ************************************ 00:11:50.745 START TEST bdev_fio 00:11:50.745 ************************************ 00:11:50.745 07:12:24 -- common/autotest_common.sh@1102 -- # fio_test_suite '' 00:11:50.745 07:12:24 -- bdev/blockdev.sh@329 -- # local env_context 00:11:50.745 07:12:24 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:11:50.745 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:11:50.746 07:12:24 -- bdev/blockdev.sh@334 -- # trap 
'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:11:50.746 07:12:24 -- bdev/blockdev.sh@337 -- # echo '' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:11:50.746 07:12:24 -- bdev/blockdev.sh@337 -- # env_context= 00:11:50.746 07:12:24 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:11:50.746 07:12:24 -- common/autotest_common.sh@1257 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:50.746 07:12:24 -- common/autotest_common.sh@1258 -- # local workload=verify 00:11:50.746 07:12:24 -- common/autotest_common.sh@1259 -- # local bdev_type=AIO 00:11:50.746 07:12:24 -- common/autotest_common.sh@1260 -- # local env_context= 00:11:50.746 07:12:24 -- common/autotest_common.sh@1261 -- # local fio_dir=/usr/src/fio 00:11:50.746 07:12:24 -- common/autotest_common.sh@1263 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:11:50.746 07:12:24 -- common/autotest_common.sh@1268 -- # '[' -z verify ']' 00:11:50.746 07:12:24 -- common/autotest_common.sh@1272 -- # '[' -n '' ']' 00:11:50.746 07:12:24 -- common/autotest_common.sh@1276 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:11:50.746 07:12:24 -- common/autotest_common.sh@1278 -- # cat 00:11:50.746 07:12:24 -- common/autotest_common.sh@1290 -- # '[' verify == verify ']' 00:11:50.746 07:12:24 -- common/autotest_common.sh@1291 -- # cat 00:11:50.746 07:12:24 -- common/autotest_common.sh@1300 -- # '[' AIO == AIO ']' 00:11:50.746 07:12:24 -- common/autotest_common.sh@1301 -- # /usr/src/fio/fio --version 00:11:50.746 07:12:24 -- common/autotest_common.sh@1301 -- # [[ fio-3.28 == *\f\i\o\-\3* ]] 00:11:50.746 07:12:24 -- common/autotest_common.sh@1302 -- # echo serialize_overlap=1 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:11:50.746 07:12:24 
-- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:11:50.746 07:12:24 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:11:50.746 07:12:24 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:11:50.746 07:12:24 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:11:50.746 07:12:24 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:50.746 07:12:24 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:11:50.746 07:12:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:11:50.746 07:12:24 -- common/autotest_common.sh@10 -- # set +x 00:11:50.746 ************************************ 00:11:50.746 START TEST bdev_fio_rw_verify 00:11:50.746 ************************************ 00:11:50.746 07:12:24 -- common/autotest_common.sh@1102 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:50.746 07:12:24 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:50.746 07:12:24 -- 
common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:11:50.746 07:12:24 -- common/autotest_common.sh@1316 -- # sanitizers=(libasan libclang_rt.asan) 00:11:50.746 07:12:24 -- common/autotest_common.sh@1316 -- # local sanitizers 00:11:50.746 07:12:24 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:50.746 07:12:24 -- common/autotest_common.sh@1318 -- # shift 00:11:50.746 07:12:24 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:11:50.746 07:12:24 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:11:50.746 07:12:24 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:11:50.746 07:12:24 -- common/autotest_common.sh@1322 -- # grep libasan 00:11:50.746 07:12:24 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:11:50.746 07:12:24 -- common/autotest_common.sh@1322 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:11:50.746 07:12:24 -- common/autotest_common.sh@1323 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:11:50.746 07:12:24 -- common/autotest_common.sh@1324 -- # break 00:11:50.746 07:12:24 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:11:50.746 07:12:24 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:11:51.006 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:11:51.006 fio-3.28 00:11:51.006 Starting 16 threads 00:12:03.213 00:12:03.213 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=113884: Tue Feb 13 07:12:36 2024 00:12:03.213 read: IOPS=239k, BW=934MiB/s (979MB/s)(2644MiB/2832msec) 00:12:03.213 slat (usec): min=2, max=32054, avg=44.92, stdev=471.22 00:12:03.213 clat (usec): min=11, max=32326, avg=339.45, stdev=1297.67 00:12:03.213 lat (usec): min=32, max=32347, avg=384.37, stdev=1380.03 00:12:03.213 clat percentiles (usec): 00:12:03.213 | 50.000th=[ 212], 99.000th=[ 848], 99.900th=[16319], 99.990th=[24249], 00:12:03.213 | 99.999th=[28443] 00:12:03.213 write: IOPS=107k, BW=419MiB/s (439MB/s)(4141MiB/9882msec); 0 zone resets 00:12:03.213 slat (usec): min=13, max=43510, avg=74.20, stdev=619.53 00:12:03.213 clat (usec): min=9, max=43874, avg=440.59, stdev=1510.45 00:12:03.213 lat (usec): min=40, max=43903, avg=514.79, stdev=1631.61 00:12:03.213 clat percentiles (usec): 00:12:03.213 | 50.000th=[ 269], 99.000th=[ 6128], 99.900th=[16450], 99.990th=[28181], 00:12:03.214 | 99.999th=[38536] 00:12:03.214 bw ( KiB/s): min=278080, max=682656, per=98.12%, avg=421015.25, stdev=6990.27, samples=305 00:12:03.214 iops : min=69520, max=170664, avg=105253.80, stdev=1747.57, samples=305 00:12:03.214 lat (usec) : 10=0.01%, 20=0.01%, 50=0.35%, 100=6.17%, 250=45.98% 00:12:03.214 lat (usec) : 500=41.67%, 750=3.27%, 1000=1.15% 00:12:03.214 lat (msec) : 2=0.39%, 4=0.06%, 10=0.13%, 20=0.78%, 50=0.04% 00:12:03.214 cpu : usr=58.48%, sys=1.90%, ctx=219636, majf=0, minf=73539 00:12:03.214 IO depths : 1=11.5%, 2=23.9%, 4=51.6%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:03.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.214 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:03.214 issued rwts: total=676875,1060026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:03.214 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:03.214 00:12:03.214 Run status group 0 (all jobs): 00:12:03.214 READ: bw=934MiB/s (979MB/s), 934MiB/s-934MiB/s (979MB/s-979MB/s), io=2644MiB (2772MB), run=2832-2832msec 00:12:03.214 WRITE: bw=419MiB/s (439MB/s), 419MiB/s-419MiB/s (439MB/s-439MB/s), io=4141MiB (4342MB), run=9882-9882msec 00:12:05.119 ----------------------------------------------------- 00:12:05.120 Suppressions used: 00:12:05.120 count bytes template 00:12:05.120 17 146 /usr/src/fio/parse.c 00:12:05.120 10027 882376 /usr/src/fio/iolog.c 00:12:05.120 2 596 libcrypto.so 00:12:05.120 ----------------------------------------------------- 00:12:05.120 00:12:05.120 ************************************ 00:12:05.120 END TEST bdev_fio_rw_verify 00:12:05.120 ************************************ 00:12:05.120 00:12:05.120 real 0m14.027s 00:12:05.120 user 1m39.359s 00:12:05.120 sys 0m4.019s 00:12:05.120 07:12:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:05.120 07:12:38 -- common/autotest_common.sh@10 -- # set +x 00:12:05.120 07:12:38 -- bdev/blockdev.sh@348 -- # rm -f 00:12:05.120 07:12:38 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:05.120 07:12:38 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:12:05.120 07:12:38 -- common/autotest_common.sh@1257 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
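[Editor's note] The rw_verify pass above is driven entirely by a generated job file: blockdev.sh emits one [job_<bdev>] section per bdev with filename=<bdev>, then launches stock fio with the SPDK bdev plugin preloaded so that "filenames" resolve to bdevs from the JSON config instead of kernel block devices. A minimal sketch under the paths from this run; the [global] options shown are an illustrative reduction, not the exact contents fio_config_gen writes into bdev.fio (the actual run also preloads libasan.so.5 ahead of the plugin).

    cat > /tmp/bdev.fio <<'EOF'
    [global]
    thread=1
    EOF
    for b in Malloc0 Malloc1p0 raid0 concat0 raid1 AIO0; do
        printf '[job_%s]\nfilename=%s\n' "$b" "$b" >> /tmp/bdev.fio
    done
    # the plugin is loaded via LD_PRELOAD; bdevs come from the JSON config
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        /tmp/bdev.fio

The trim pass that fio_config_gen is preparing next reuses the same mechanism, only filtering the job list down to bdevs whose supported_io_types report unmap == true, which is what the jq select() in the following trace extracts.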
00:12:05.120 07:12:38 -- common/autotest_common.sh@1258 -- # local workload=trim 00:12:05.120 07:12:38 -- common/autotest_common.sh@1259 -- # local bdev_type= 00:12:05.120 07:12:38 -- common/autotest_common.sh@1260 -- # local env_context= 00:12:05.120 07:12:38 -- common/autotest_common.sh@1261 -- # local fio_dir=/usr/src/fio 00:12:05.120 07:12:38 -- common/autotest_common.sh@1263 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:05.120 07:12:38 -- common/autotest_common.sh@1268 -- # '[' -z trim ']' 00:12:05.120 07:12:38 -- common/autotest_common.sh@1272 -- # '[' -n '' ']' 00:12:05.120 07:12:38 -- common/autotest_common.sh@1276 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:05.120 07:12:38 -- common/autotest_common.sh@1278 -- # cat 00:12:05.120 07:12:38 -- common/autotest_common.sh@1290 -- # '[' trim == verify ']' 00:12:05.120 07:12:38 -- common/autotest_common.sh@1305 -- # '[' trim == trim ']' 00:12:05.120 07:12:38 -- common/autotest_common.sh@1306 -- # echo rw=trimwrite 00:12:05.120 07:12:38 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:05.121 07:12:38 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d5d8eff6-af25-4b58-8523-58307d53879c"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d5d8eff6-af25-4b58-8523-58307d53879c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "48d7a191-d6f7-5371-bba8-2cd34696d5bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "48d7a191-d6f7-5371-bba8-2cd34696d5bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "cb0b4a8f-8d7e-5066-aa2b-7fc24602379e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cb0b4a8f-8d7e-5066-aa2b-7fc24602379e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' 
"aliases": [' ' "2b30e472-15b0-562e-b17b-f0e8be65e5d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b30e472-15b0-562e-b17b-f0e8be65e5d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "136bd053-7aa3-518f-a1b0-9b1382e0225d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "136bd053-7aa3-518f-a1b0-9b1382e0225d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "227a74ce-4963-57e2-a130-4eca26be30b6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "227a74ce-4963-57e2-a130-4eca26be30b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "19938833-656d-5fab-8db4-603a68e23284"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "19938833-656d-5fab-8db4-603a68e23284",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "d09ba780-6652-583c-8489-d80753efe1c1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d09ba780-6652-583c-8489-d80753efe1c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": 
true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "78ae8ebb-ccdf-5ac7-a449-1cc5e22307e3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "78ae8ebb-ccdf-5ac7-a449-1cc5e22307e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0d5c9065-7597-5d67-9337-4762e1383a16"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0d5c9065-7597-5d67-9337-4762e1383a16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "2d67eb24-31d9-598b-a6c7-f9431678fa4d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2d67eb24-31d9-598b-a6c7-f9431678fa4d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "c5b21be7-ec59-5d4d-a76b-07ca73c71c6e"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c5b21be7-ec59-5d4d-a76b-07ca73c71c6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "fc36f74b-8793-4768-a67c-037e73a1e493"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 
131072,' ' "uuid": "fc36f74b-8793-4768-a67c-037e73a1e493",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fc36f74b-8793-4768-a67c-037e73a1e493",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "158437f4-31db-434c-88e2-159f1f336e98",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "717aecce-4f12-40ac-9ff9-8dc3e840f940",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "147efd5e-002f-42d7-b738-7464790a0098"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "147efd5e-002f-42d7-b738-7464790a0098",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "147efd5e-002f-42d7-b738-7464790a0098",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c355cfbe-caac-4695-b623-fe8c2d267e75",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "ba4e14d7-49f7-4f1f-b903-3b4183f158ca",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "cafdea3c-0128-4ba2-a340-8263cb93b2d3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cafdea3c-0128-4ba2-a340-8263cb93b2d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' 
' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cafdea3c-0128-4ba2-a340-8263cb93b2d3",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c74bae9a-8217-4ab0-8f77-2265ff8e3137",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "2fce627c-a8a9-4401-91b2-1de1a2657850",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "98b2f15c-11c7-4bbe-975e-14c0b08e853f"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "98b2f15c-11c7-4bbe-975e-14c0b08e853f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:05.121 07:12:38 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:12:05.121 Malloc1p0 00:12:05.121 Malloc1p1 00:12:05.121 Malloc2p0 00:12:05.121 Malloc2p1 00:12:05.121 Malloc2p2 00:12:05.121 Malloc2p3 00:12:05.121 Malloc2p4 00:12:05.121 Malloc2p5 00:12:05.121 Malloc2p6 00:12:05.121 Malloc2p7 00:12:05.121 TestPT 00:12:05.121 raid0 00:12:05.121 concat0 ]] 00:12:05.121 07:12:38 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "d5d8eff6-af25-4b58-8523-58307d53879c"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "d5d8eff6-af25-4b58-8523-58307d53879c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "48d7a191-d6f7-5371-bba8-2cd34696d5bf"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "48d7a191-d6f7-5371-bba8-2cd34696d5bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' 
"driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "cb0b4a8f-8d7e-5066-aa2b-7fc24602379e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "cb0b4a8f-8d7e-5066-aa2b-7fc24602379e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "2b30e472-15b0-562e-b17b-f0e8be65e5d0"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2b30e472-15b0-562e-b17b-f0e8be65e5d0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "136bd053-7aa3-518f-a1b0-9b1382e0225d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "136bd053-7aa3-518f-a1b0-9b1382e0225d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "227a74ce-4963-57e2-a130-4eca26be30b6"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "227a74ce-4963-57e2-a130-4eca26be30b6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "19938833-656d-5fab-8db4-603a68e23284"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "19938833-656d-5fab-8db4-603a68e23284",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "d09ba780-6652-583c-8489-d80753efe1c1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d09ba780-6652-583c-8489-d80753efe1c1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "78ae8ebb-ccdf-5ac7-a449-1cc5e22307e3"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "78ae8ebb-ccdf-5ac7-a449-1cc5e22307e3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "0d5c9065-7597-5d67-9337-4762e1383a16"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "0d5c9065-7597-5d67-9337-4762e1383a16",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "2d67eb24-31d9-598b-a6c7-f9431678fa4d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2d67eb24-31d9-598b-a6c7-f9431678fa4d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "c5b21be7-ec59-5d4d-a76b-07ca73c71c6e"' ' ],' ' "product_name": "passthru",' ' 
"block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "c5b21be7-ec59-5d4d-a76b-07ca73c71c6e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "fc36f74b-8793-4768-a67c-037e73a1e493"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fc36f74b-8793-4768-a67c-037e73a1e493",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "fc36f74b-8793-4768-a67c-037e73a1e493",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "158437f4-31db-434c-88e2-159f1f336e98",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "717aecce-4f12-40ac-9ff9-8dc3e840f940",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "147efd5e-002f-42d7-b738-7464790a0098"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "147efd5e-002f-42d7-b738-7464790a0098",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "147efd5e-002f-42d7-b738-7464790a0098",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "c355cfbe-caac-4695-b623-fe8c2d267e75",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' 
' "uuid": "ba4e14d7-49f7-4f1f-b903-3b4183f158ca",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "cafdea3c-0128-4ba2-a340-8263cb93b2d3"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "cafdea3c-0128-4ba2-a340-8263cb93b2d3",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "cafdea3c-0128-4ba2-a340-8263cb93b2d3",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "c74bae9a-8217-4ab0-8f77-2265ff8e3137",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "2fce627c-a8a9-4401-91b2-1de1a2657850",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "98b2f15c-11c7-4bbe-975e-14c0b08e853f"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "98b2f15c-11c7-4bbe-975e-14c0b08e853f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- 
bdev/blockdev.sh@355 -- # echo '[job_Malloc2p0]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:12:05.122 07:12:38 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:12:05.122 07:12:38 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:12:05.122 07:12:38 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:12:05.122 07:12:38 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:05.122 07:12:38 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:12:05.122 07:12:38 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:12:05.123 07:12:38 -- common/autotest_common.sh@10 -- # set +x 00:12:05.123 ************************************ 00:12:05.123 START TEST bdev_fio_trim 00:12:05.123 ************************************ 00:12:05.123 07:12:38 -- common/autotest_common.sh@1102 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:05.123 07:12:38 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:05.123 07:12:38 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:12:05.123 07:12:38 -- common/autotest_common.sh@1316 -- # sanitizers=(libasan libclang_rt.asan) 00:12:05.123 07:12:38 -- common/autotest_common.sh@1316 -- # local sanitizers 00:12:05.123 07:12:38 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:05.123 07:12:38 -- common/autotest_common.sh@1318 -- # shift 00:12:05.123 07:12:38 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:12:05.123 07:12:38 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:12:05.123 07:12:38 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:05.123 07:12:38 -- common/autotest_common.sh@1322 -- # grep libasan 00:12:05.123 07:12:38 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:12:05.123 07:12:38 -- common/autotest_common.sh@1322 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:12:05.123 07:12:38 -- common/autotest_common.sh@1323 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:12:05.123 07:12:38 -- common/autotest_common.sh@1324 -- # break 00:12:05.123 07:12:38 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:05.123 07:12:38 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:05.123 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:05.123 fio-3.28 00:12:05.123 Starting 14 threads 00:12:17.344 00:12:17.344 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=114127: Tue Feb 13 07:12:50 2024 00:12:17.344 write: IOPS=123k, BW=481MiB/s (504MB/s)(4815MiB/10017msec); 0 zone resets 00:12:17.344 slat (usec): min=2, max=29961, avg=43.33, stdev=436.22 00:12:17.344 clat (usec): min=17, max=40234, avg=269.56, stdev=1077.25 00:12:17.344 lat (usec): min=34, max=40269, avg=312.89, stdev=1161.90 00:12:17.344 clat percentiles (usec): 00:12:17.344 | 50.000th=[ 186], 99.000th=[ 396], 99.900th=[16319], 99.990th=[20317], 00:12:17.344 | 99.999th=[28181] 00:12:17.344 bw ( KiB/s): min=336016, max=683384, per=99.64%, avg=490423.43, stdev=8989.19, samples=268 00:12:17.344 iops : min=84004, max=170846, avg=122605.83, stdev=2247.30, samples=268 00:12:17.344 trim: IOPS=123k, BW=481MiB/s (504MB/s)(4815MiB/10017msec) 00:12:17.344 slat (usec): min=4, max=28028, avg=28.71, stdev=348.99 00:12:17.344 clat (usec): min=4, max=40269, avg=311.85, stdev=1160.83 00:12:17.344 lat (usec): min=14, max=40296, avg=340.56, stdev=1211.76 00:12:17.344 clat percentiles (usec): 00:12:17.344 | 50.000th=[ 217], 99.000th=[ 437], 99.900th=[16319], 99.990th=[20317], 00:12:17.344 | 99.999th=[28181] 00:12:17.344 bw ( KiB/s): min=336024, max=683448, per=99.64%, avg=490426.80, stdev=8989.74, samples=268 00:12:17.344 iops : min=84006, max=170862, avg=122606.67, stdev=2247.44, samples=268 00:12:17.344 lat (usec) : 10=0.01%, 20=0.03%, 50=0.36%, 100=5.64%, 250=65.81% 00:12:17.344 lat (usec) : 500=27.45%, 750=0.04%, 1000=0.02% 00:12:17.344 lat (msec) : 2=0.02%, 4=0.01%, 10=0.10%, 20=0.49%, 50=0.01% 00:12:17.344 cpu : usr=69.02%, sys=0.39%, ctx=171520, majf=0, minf=733 00:12:17.344 IO depths : 1=12.5%, 2=24.9%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:17.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.344 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:17.344 issued rwts: total=0,1232625,1232628,0 short=0,0,0,0 dropped=0,0,0,0 00:12:17.344 latency : target=0, window=0, percentile=100.00%, depth=8 00:12:17.344 00:12:17.344 Run status group 0 (all jobs): 00:12:17.344 WRITE: bw=481MiB/s (504MB/s), 481MiB/s-481MiB/s (504MB/s-504MB/s), io=4815MiB (5049MB), run=10017-10017msec 00:12:17.344 TRIM: bw=481MiB/s (504MB/s), 481MiB/s-481MiB/s (504MB/s-504MB/s), io=4815MiB (5049MB), run=10017-10017msec 00:12:18.734 ----------------------------------------------------- 00:12:18.734 Suppressions used: 00:12:18.734 count bytes template 00:12:18.734 15 135 /usr/src/fio/parse.c 
00:12:18.734 2 596 libcrypto.so 00:12:18.734 ----------------------------------------------------- 00:12:18.734 00:12:18.734 ************************************ 00:12:18.734 END TEST bdev_fio_trim 00:12:18.734 ************************************ 00:12:18.734 00:12:18.734 real 0m13.886s 00:12:18.734 user 1m41.554s 00:12:18.734 sys 0m1.310s 00:12:18.734 07:12:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.734 07:12:52 -- common/autotest_common.sh@10 -- # set +x 00:12:19.068 07:12:52 -- bdev/blockdev.sh@366 -- # rm -f 00:12:19.068 07:12:52 -- bdev/blockdev.sh@367 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:19.068 /home/vagrant/spdk_repo/spdk 00:12:19.068 07:12:52 -- bdev/blockdev.sh@368 -- # popd 00:12:19.068 07:12:52 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:12:19.068 ************************************ 00:12:19.068 END TEST bdev_fio 00:12:19.068 ************************************ 00:12:19.068 00:12:19.068 real 0m28.298s 00:12:19.068 user 3m21.121s 00:12:19.068 sys 0m5.429s 00:12:19.068 07:12:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.068 07:12:52 -- common/autotest_common.sh@10 -- # set +x 00:12:19.068 07:12:52 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:19.068 07:12:52 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:19.068 07:12:52 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:12:19.068 07:12:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:19.068 07:12:52 -- common/autotest_common.sh@10 -- # set +x 00:12:19.068 ************************************ 00:12:19.068 START TEST bdev_verify 00:12:19.068 ************************************ 00:12:19.068 07:12:52 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:12:19.068 [2024-02-13 07:12:52.602301] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:12:19.068 [2024-02-13 07:12:52.602505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114335 ] 00:12:19.327 [2024-02-13 07:12:52.778484] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:19.585 [2024-02-13 07:12:53.025205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.585 [2024-02-13 07:12:53.025201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:19.585 [2024-02-13 07:12:53.025495] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:19.844 [2024-02-13 07:12:53.389962] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:19.844 [2024-02-13 07:12:53.390105] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:19.844 [2024-02-13 07:12:53.397912] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:19.844 [2024-02-13 07:12:53.397978] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:19.844 [2024-02-13 07:12:53.405981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:19.844 [2024-02-13 07:12:53.406046] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:19.844 [2024-02-13 07:12:53.406082] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:20.102 [2024-02-13 07:12:53.595864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:20.102 [2024-02-13 07:12:53.596012] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.102 [2024-02-13 07:12:53.596064] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:20.102 [2024-02-13 07:12:53.596111] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.102 [2024-02-13 07:12:53.598923] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.102 [2024-02-13 07:12:53.598977] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:20.362 Running I/O for 5 seconds... 
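Reference: the verify pass above is the bdevperf invocation from the run_test line earlier in this log, rebuilt here as a standalone command for clarity (paths assume this workspace layout; -m 0x3 matches the two reactors started above):

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Queue depth 128 with 4 KiB I/O per job, running a 5-second verify workload against every bdev declared in bdev.json.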
00:12:25.633 00:12:25.633 Latency(us) 00:12:25.633 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:25.633 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x1000 00:12:25.633 Malloc0 : 5.16 1802.46 7.04 0.00 0.00 70625.33 1951.19 196369.69 00:12:25.633 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x1000 length 0x1000 00:12:25.633 Malloc0 : 5.16 1802.29 7.04 0.00 0.00 70649.77 1824.58 198276.19 00:12:25.633 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x800 00:12:25.633 Malloc1p0 : 5.16 1250.59 4.89 0.00 0.00 101627.14 3842.79 119156.36 00:12:25.633 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x800 length 0x800 00:12:25.633 Malloc1p0 : 5.16 1250.40 4.88 0.00 0.00 101677.33 3902.37 120109.61 00:12:25.633 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x800 00:12:25.633 Malloc1p1 : 5.16 1250.00 4.88 0.00 0.00 101511.87 3872.58 114866.73 00:12:25.633 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x800 length 0x800 00:12:25.633 Malloc1p1 : 5.16 1249.82 4.88 0.00 0.00 101548.20 3813.00 116296.61 00:12:25.633 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x200 00:12:25.633 Malloc2p0 : 5.16 1249.40 4.88 0.00 0.00 101391.80 3813.00 111530.36 00:12:25.633 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x200 length 0x200 00:12:25.633 Malloc2p0 : 5.16 1249.23 4.88 0.00 0.00 101437.01 3783.21 112483.61 00:12:25.633 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x200 00:12:25.633 Malloc2p1 : 5.17 1248.78 4.88 0.00 0.00 101285.79 3768.32 108193.98 00:12:25.633 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x200 length 0x200 00:12:25.633 Malloc2p1 : 5.17 1248.65 4.88 0.00 0.00 101309.04 3753.43 109147.23 00:12:25.633 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x200 00:12:25.633 Malloc2p2 : 5.17 1248.19 4.88 0.00 0.00 101154.40 3813.00 104857.60 00:12:25.633 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x200 length 0x200 00:12:25.633 Malloc2p2 : 5.17 1248.08 4.88 0.00 0.00 101191.30 3842.79 105334.23 00:12:25.633 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x200 00:12:25.633 Malloc2p3 : 5.17 1247.62 4.87 0.00 0.00 101055.12 3678.95 101044.60 00:12:25.633 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x200 length 0x200 00:12:25.633 Malloc2p3 : 5.17 1247.53 4.87 0.00 0.00 101082.65 3738.53 101997.85 00:12:25.633 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x200 00:12:25.633 Malloc2p4 : 5.17 1247.02 4.87 0.00 0.00 
100943.67 3649.16 97708.22 00:12:25.633 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x200 length 0x200 00:12:25.633 Malloc2p4 : 5.17 1246.95 4.87 0.00 0.00 100957.13 3708.74 98184.84 00:12:25.633 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x200 00:12:25.633 Malloc2p5 : 5.18 1246.46 4.87 0.00 0.00 100830.36 3574.69 94371.84 00:12:25.633 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x200 length 0x200 00:12:25.633 Malloc2p5 : 5.18 1246.40 4.87 0.00 0.00 100852.11 3708.74 94848.47 00:12:25.633 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x200 00:12:25.633 Malloc2p6 : 5.18 1245.79 4.87 0.00 0.00 100729.10 3738.53 90558.84 00:12:25.633 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x200 length 0x200 00:12:25.633 Malloc2p6 : 5.18 1245.72 4.87 0.00 0.00 100730.57 3738.53 91035.46 00:12:25.633 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x200 00:12:25.633 Malloc2p7 : 5.18 1245.06 4.86 0.00 0.00 100612.65 3708.74 86745.83 00:12:25.633 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x200 length 0x200 00:12:25.633 Malloc2p7 : 5.18 1245.00 4.86 0.00 0.00 100610.53 3678.95 87222.46 00:12:25.633 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x1000 00:12:25.633 TestPT : 5.18 1230.51 4.81 0.00 0.00 101600.17 6821.70 87222.46 00:12:25.633 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x1000 length 0x1000 00:12:25.633 TestPT : 5.18 1226.78 4.79 0.00 0.00 101899.51 37176.79 88175.71 00:12:25.633 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x2000 00:12:25.633 raid0 : 5.19 1243.75 4.86 0.00 0.00 100326.36 3813.00 80549.70 00:12:25.633 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x2000 length 0x2000 00:12:25.633 raid0 : 5.19 1243.69 4.86 0.00 0.00 100368.89 3961.95 79119.83 00:12:25.633 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x2000 00:12:25.633 concat0 : 5.19 1243.09 4.86 0.00 0.00 100212.64 3664.06 79119.83 00:12:25.633 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x2000 length 0x2000 00:12:25.633 concat0 : 5.19 1243.04 4.86 0.00 0.00 100235.70 3813.00 79596.45 00:12:25.633 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x0 length 0x1000 00:12:25.633 raid1 : 5.20 1258.39 4.92 0.00 0.00 99376.01 2055.45 78643.20 00:12:25.633 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x1000 length 0x1000 00:12:25.633 raid1 : 5.19 1242.35 4.85 0.00 0.00 100117.07 4915.20 79596.45 00:12:25.633 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA 
range: start 0x0 length 0x4e2 00:12:25.633 AIO0 : 5.20 1257.16 4.91 0.00 0.00 99256.20 3425.75 78643.20 00:12:25.633 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:25.633 Verification LBA range: start 0x4e2 length 0x4e2 00:12:25.633 AIO0 : 5.20 1256.89 4.91 0.00 0.00 99279.40 647.91 80073.08 00:12:25.633 =================================================================================================================== 00:12:25.633 Total : 41007.10 160.18 0.00 0.00 98192.21 647.91 198276.19 00:12:25.892 [2024-02-13 07:12:59.353104] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:27.798 00:12:27.798 real 0m8.763s 00:12:27.798 user 0m15.663s 00:12:27.798 sys 0m0.688s 00:12:27.798 ************************************ 00:12:27.798 END TEST bdev_verify 00:12:27.798 07:13:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:27.798 07:13:01 -- common/autotest_common.sh@10 -- # set +x 00:12:27.798 ************************************ 00:12:27.798 07:13:01 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:27.798 07:13:01 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:12:27.798 07:13:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:27.798 07:13:01 -- common/autotest_common.sh@10 -- # set +x 00:12:27.798 ************************************ 00:12:27.798 START TEST bdev_verify_big_io 00:12:27.798 ************************************ 00:12:27.798 07:13:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:12:27.798 [2024-02-13 07:13:01.409353] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:12:27.798 [2024-02-13 07:13:01.409618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114487 ] 00:12:28.057 [2024-02-13 07:13:01.589895] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:28.316 [2024-02-13 07:13:01.836342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.316 [2024-02-13 07:13:01.836337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.316 [2024-02-13 07:13:01.836610] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:28.574 [2024-02-13 07:13:02.211201] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:28.574 [2024-02-13 07:13:02.211371] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:28.574 [2024-02-13 07:13:02.219168] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:28.574 [2024-02-13 07:13:02.219256] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:28.574 [2024-02-13 07:13:02.227214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:28.574 [2024-02-13 07:13:02.227282] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:28.574 [2024-02-13 07:13:02.227342] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:28.833 [2024-02-13 07:13:02.417204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:28.833 [2024-02-13 07:13:02.417355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.833 [2024-02-13 07:13:02.417403] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:28.833 [2024-02-13 07:13:02.417427] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.833 [2024-02-13 07:13:02.420101] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.833 [2024-02-13 07:13:02.420167] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:29.401 [2024-02-13 07:13:02.790978] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.794607] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.798568] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). 
Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.802658] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.806001] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.810186] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.813720] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.817627] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.820737] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.824471] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.827578] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.831423] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.834674] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.838572] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.842406] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). 
Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.845591] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:12:29.401 [2024-02-13 07:13:02.919165] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:29.401 [2024-02-13 07:13:02.925209] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:12:29.401 Running I/O for 5 seconds... 00:12:35.970 00:12:35.970 Latency(us) 00:12:35.970 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.970 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:35.970 Verification LBA range: start 0x0 length 0x100 00:12:35.970 Malloc0 : 5.64 331.11 20.69 0.00 0.00 377840.49 25499.46 1014258.97 00:12:35.970 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:35.970 Verification LBA range: start 0x100 length 0x100 00:12:35.970 Malloc0 : 5.59 356.79 22.30 0.00 0.00 352562.04 21090.68 1128649.08 00:12:35.970 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:35.970 Verification LBA range: start 0x0 length 0x80 00:12:35.970 Malloc1p0 : 5.77 169.53 10.60 0.00 0.00 720798.02 47662.55 1212535.16 00:12:35.970 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:35.970 Verification LBA range: start 0x80 length 0x80 00:12:35.970 Malloc1p0 : 5.60 277.85 17.37 0.00 0.00 447950.53 41704.73 1006632.96 00:12:35.970 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:35.970 Verification LBA range: start 0x0 length 0x80 00:12:35.970 Malloc1p1 : 5.91 113.06 7.07 0.00 0.00 1055116.04 45041.11 2196290.09 00:12:35.970 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:35.970 Verification LBA range: start 0x80 length 0x80 00:12:35.970 Malloc1p1 : 5.77 125.38 7.84 0.00 0.00 963305.14 40274.85 2013265.92 00:12:35.971 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x20 00:12:35.971 Malloc2p0 : 5.71 64.41 4.03 0.00 0.00 464879.08 7566.43 796917.76 00:12:35.971 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x20 length 0x20 00:12:35.971 Malloc2p0 : 5.60 69.66 4.35 0.00 0.00 433043.94 7238.75 636771.61 00:12:35.971 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x20 00:12:35.971 Malloc2p1 : 5.71 64.39 4.02 0.00 0.00 462665.87 7983.48 777852.74 00:12:35.971 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x20 length 0x20 00:12:35.971 Malloc2p1 : 5.60 69.64 4.35 0.00 0.00 431326.48 7179.17 621519.59 00:12:35.971 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x20 00:12:35.971 Malloc2p2 : 5.72 64.38 4.02 0.00 0.00 460609.77 7238.75 
766413.73 00:12:35.971 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x20 length 0x20 00:12:35.971 Malloc2p2 : 5.60 69.63 4.35 0.00 0.00 429622.51 6613.18 610080.58 00:12:35.971 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x20 00:12:35.971 Malloc2p3 : 5.72 64.36 4.02 0.00 0.00 458546.55 7864.32 751161.72 00:12:35.971 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x20 length 0x20 00:12:35.971 Malloc2p3 : 5.60 69.61 4.35 0.00 0.00 427910.74 6940.86 598641.57 00:12:35.971 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x20 00:12:35.971 Malloc2p4 : 5.72 64.35 4.02 0.00 0.00 456429.02 7804.74 735909.70 00:12:35.971 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x20 length 0x20 00:12:35.971 Malloc2p4 : 5.60 69.59 4.35 0.00 0.00 426239.04 6672.76 587202.56 00:12:35.971 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x20 00:12:35.971 Malloc2p5 : 5.72 64.33 4.02 0.00 0.00 454318.20 9294.20 720657.69 00:12:35.971 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x20 length 0x20 00:12:35.971 Malloc2p5 : 5.61 69.58 4.35 0.00 0.00 424686.19 7119.59 571950.55 00:12:35.971 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x20 00:12:35.971 Malloc2p6 : 5.72 64.32 4.02 0.00 0.00 452190.95 7626.01 705405.67 00:12:35.971 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x20 length 0x20 00:12:35.971 Malloc2p6 : 5.61 69.56 4.35 0.00 0.00 423009.52 7864.32 560511.53 00:12:35.971 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x20 00:12:35.971 Malloc2p7 : 5.72 64.31 4.02 0.00 0.00 450062.33 7208.96 690153.66 00:12:35.971 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x20 length 0x20 00:12:35.971 Malloc2p7 : 5.61 69.55 4.35 0.00 0.00 421290.92 7328.12 549072.52 00:12:35.971 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x100 00:12:35.971 TestPT : 5.85 118.88 7.43 0.00 0.00 959048.08 51713.86 2196290.09 00:12:35.971 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x100 length 0x100 00:12:35.971 TestPT : 5.81 120.51 7.53 0.00 0.00 949675.54 52905.43 2059021.96 00:12:35.971 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x200 00:12:35.971 raid0 : 5.87 125.97 7.87 0.00 0.00 884572.72 46947.61 2211542.11 00:12:35.971 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x200 length 0x200 00:12:35.971 raid0 : 5.81 130.46 8.15 0.00 0.00 872540.55 43372.92 2013265.92 00:12:35.971 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 
length 0x200 00:12:35.971 concat0 : 5.89 135.37 8.46 0.00 0.00 817873.38 26333.56 2226794.12 00:12:35.971 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x200 length 0x200 00:12:35.971 concat0 : 5.84 136.40 8.52 0.00 0.00 821892.84 25856.93 2028517.93 00:12:35.971 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x100 00:12:35.971 raid1 : 5.90 147.03 9.19 0.00 0.00 738231.26 15073.28 2242046.14 00:12:35.971 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x100 length 0x100 00:12:35.971 raid1 : 5.81 152.93 9.56 0.00 0.00 727604.41 22282.24 2043769.95 00:12:35.971 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x0 length 0x4e 00:12:35.971 AIO0 : 5.97 187.71 11.73 0.00 0.00 347462.52 960.70 1342177.28 00:12:35.971 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:12:35.971 Verification LBA range: start 0x4e length 0x4e 00:12:35.971 AIO0 : 5.84 158.76 9.92 0.00 0.00 422613.94 547.37 1189657.13 00:12:35.971 =================================================================================================================== 00:12:35.971 Total : 3859.39 241.21 0.00 0.00 583236.45 547.37 2242046.14 00:12:35.971 [2024-02-13 07:13:09.310465] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:37.873 00:12:37.873 real 0m9.875s 00:12:37.873 user 0m17.971s 00:12:37.873 sys 0m0.637s 00:12:37.873 ************************************ 00:12:37.873 END TEST bdev_verify_big_io 00:12:37.873 07:13:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:37.873 07:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 ************************************ 00:12:37.873 07:13:11 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:37.873 07:13:11 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:12:37.873 07:13:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:37.873 07:13:11 -- common/autotest_common.sh@10 -- # set +x 00:12:37.873 ************************************ 00:12:37.873 START TEST bdev_write_zeroes 00:12:37.873 ************************************ 00:12:37.873 07:13:11 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:37.873 [2024-02-13 07:13:11.348604] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:12:37.873 [2024-02-13 07:13:11.349230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114647 ] 00:12:37.873 [2024-02-13 07:13:11.531522] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.132 [2024-02-13 07:13:11.756862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.132 [2024-02-13 07:13:11.757008] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:38.700 [2024-02-13 07:13:12.153754] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:38.700 [2024-02-13 07:13:12.153893] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:12:38.700 [2024-02-13 07:13:12.161727] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:38.700 [2024-02-13 07:13:12.161810] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:12:38.700 [2024-02-13 07:13:12.169766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.700 [2024-02-13 07:13:12.169813] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:12:38.700 [2024-02-13 07:13:12.169842] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:12:38.700 [2024-02-13 07:13:12.371822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:12:38.700 [2024-02-13 07:13:12.371940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.700 [2024-02-13 07:13:12.371977] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:38.700 [2024-02-13 07:13:12.372007] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.700 [2024-02-13 07:13:12.374765] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.700 [2024-02-13 07:13:12.374821] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:12:39.268 Running I/O for 1 seconds... 
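The vbdev_passthru notices above show the TestPT bdev being layered on Malloc3 out of the JSON config, with creation deferred until the base bdev arrives. A hedged equivalent of that configuration as a single RPC call (the method name and flags are the stock SPDK ones; they are not spelled out in this log):

    # -b names the existing base bdev, -p the passthru vbdev to create on top of it
    scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT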
00:12:40.204 00:12:40.204 Latency(us) 00:12:40.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.204 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc0 : 1.03 5576.99 21.79 0.00 0.00 22933.67 744.73 42896.29 00:12:40.204 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc1p0 : 1.03 5570.80 21.76 0.00 0.00 22917.90 1064.96 41943.04 00:12:40.204 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc1p1 : 1.04 5564.35 21.74 0.00 0.00 22898.39 960.70 41228.10 00:12:40.204 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc2p0 : 1.04 5558.36 21.71 0.00 0.00 22880.07 953.25 40274.85 00:12:40.204 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc2p1 : 1.04 5552.17 21.69 0.00 0.00 22853.18 953.25 39321.60 00:12:40.204 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc2p2 : 1.04 5545.69 21.66 0.00 0.00 22827.98 1005.38 38606.66 00:12:40.204 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc2p3 : 1.04 5539.56 21.64 0.00 0.00 22809.05 1072.41 37653.41 00:12:40.204 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc2p4 : 1.04 5533.36 21.61 0.00 0.00 22781.80 997.93 36461.85 00:12:40.204 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc2p5 : 1.04 5527.00 21.59 0.00 0.00 22759.66 975.59 35508.60 00:12:40.204 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc2p6 : 1.04 5520.94 21.57 0.00 0.00 22736.78 1020.28 34555.35 00:12:40.204 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 Malloc2p7 : 1.06 5576.63 21.78 0.00 0.00 22465.09 1228.80 33363.78 00:12:40.204 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 TestPT : 1.06 5570.71 21.76 0.00 0.00 22434.00 1057.51 31933.91 00:12:40.204 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 raid0 : 1.06 5563.15 21.73 0.00 0.00 22397.59 1742.66 30265.72 00:12:40.204 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 concat0 : 1.06 5556.34 21.70 0.00 0.00 22345.56 1630.95 28597.53 00:12:40.204 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 raid1 : 1.06 5548.36 21.67 0.00 0.00 22286.14 2621.44 26214.40 00:12:40.204 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:12:40.204 AIO0 : 1.06 5536.66 21.63 0.00 0.00 22220.78 1630.95 25856.93 00:12:40.204 =================================================================================================================== 00:12:40.204 Total : 88841.08 347.04 0.00 0.00 22656.77 744.73 42896.29 00:12:40.204 [2024-02-13 07:13:13.870287] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:42.109 00:12:42.109 real 0m4.497s 00:12:42.109 user 0m3.727s 00:12:42.109 sys 0m0.581s 00:12:42.109 07:13:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:42.109 07:13:15 -- common/autotest_common.sh@10 -- # set +x 
00:12:42.109 ************************************ 00:12:42.109 END TEST bdev_write_zeroes 00:12:42.109 ************************************ 00:12:42.370 07:13:15 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:42.370 07:13:15 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:12:42.370 07:13:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:42.370 07:13:15 -- common/autotest_common.sh@10 -- # set +x 00:12:42.370 ************************************ 00:12:42.370 START TEST bdev_json_nonenclosed 00:12:42.370 ************************************ 00:12:42.370 07:13:15 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:42.370 [2024-02-13 07:13:15.903600] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:12:42.370 [2024-02-13 07:13:15.903847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114739 ] 00:12:42.629 [2024-02-13 07:13:16.076311] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.888 [2024-02-13 07:13:16.324672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.888 [2024-02-13 07:13:16.324864] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:42.888 [2024-02-13 07:13:16.325119] json_config.c: 598:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
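The 'not enclosed in {}' rejection above is the point of this test: nonenclosed.json is deliberately malformed, and bdevperf is expected to refuse it cleanly. The file's contents are not echoed anywhere in this log; a hypothetical config shaped like this would trip the same check:

    # illustrative only - top-level members with the enclosing {} missing
    cat > nonenclosed-example.json <<'EOF'
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
    EOF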
00:12:42.888 [2024-02-13 07:13:16.325162] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:42.888 [2024-02-13 07:13:16.325213] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:42.888 [2024-02-13 07:13:16.325265] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:43.145 00:12:43.145 real 0m0.897s 00:12:43.145 user 0m0.645s 00:12:43.145 sys 0m0.152s 00:12:43.145 07:13:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.145 07:13:16 -- common/autotest_common.sh@10 -- # set +x 00:12:43.145 ************************************ 00:12:43.145 END TEST bdev_json_nonenclosed 00:12:43.145 ************************************ 00:12:43.145 07:13:16 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.145 07:13:16 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:12:43.145 07:13:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:43.145 07:13:16 -- common/autotest_common.sh@10 -- # set +x 00:12:43.145 ************************************ 00:12:43.145 START TEST bdev_json_nonarray 00:12:43.145 ************************************ 00:12:43.145 07:13:16 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:12:43.404 [2024-02-13 07:13:16.853817] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:12:43.404 [2024-02-13 07:13:16.854033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114777 ] 00:12:43.404 [2024-02-13 07:13:17.023149] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.662 [2024-02-13 07:13:17.242041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.662 [2024-02-13 07:13:17.242175] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:12:43.662 [2024-02-13 07:13:17.242353] json_config.c: 604:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
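bdev_json_nonarray is the sibling negative test: the "'subsystems' should be an array" error just above is its expected outcome. Again the fixture itself is not shown in the log; a hypothetical config that fails this check:

    # illustrative only - "subsystems" given as an object rather than an array
    cat > nonarray-example.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF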
00:12:43.662 [2024-02-13 07:13:17.242397] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:12:43.662 [2024-02-13 07:13:17.242445] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:43.662 [2024-02-13 07:13:17.242510] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:12:44.227 00:12:44.227 real 0m0.872s 00:12:44.227 user 0m0.631s 00:12:44.227 sys 0m0.140s 00:12:44.227 07:13:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:44.227 ************************************ 00:12:44.227 END TEST bdev_json_nonarray 00:12:44.227 ************************************ 00:12:44.227 07:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:44.227 07:13:17 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]] 00:12:44.227 07:13:17 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite '' 00:12:44.227 07:13:17 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:12:44.227 07:13:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:44.227 07:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:44.227 ************************************ 00:12:44.227 START TEST bdev_qos 00:12:44.227 ************************************ 00:12:44.227 07:13:17 -- common/autotest_common.sh@1102 -- # qos_test_suite '' 00:12:44.227 07:13:17 -- bdev/blockdev.sh@444 -- # QOS_PID=114815 00:12:44.227 Process qos testing pid: 114815 00:12:44.227 07:13:17 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 114815' 00:12:44.227 07:13:17 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:12:44.227 07:13:17 -- bdev/blockdev.sh@447 -- # waitforlisten 114815 00:12:44.227 07:13:17 -- common/autotest_common.sh@817 -- # '[' -z 114815 ']' 00:12:44.227 07:13:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.227 07:13:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:44.227 07:13:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.227 07:13:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:44.227 07:13:17 -- common/autotest_common.sh@10 -- # set +x 00:12:44.227 07:13:17 -- bdev/blockdev.sh@443 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:12:44.227 [2024-02-13 07:13:17.785177] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
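The QoS suite starts bdevperf with -z, so the app initializes and then parks instead of generating IO straight away; that lets the harness create and shape bdevs over RPC first. Traffic is kicked off later with the companion script, exactly as traced further down:

    # tell a waiting (-z) bdevperf instance to begin its configured workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests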
00:12:44.227 [2024-02-13 07:13:17.785664] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114815 ] 00:12:44.486 [2024-02-13 07:13:17.959062] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.744 [2024-02-13 07:13:18.213484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.310 07:13:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:45.310 07:13:18 -- common/autotest_common.sh@850 -- # return 0 00:12:45.310 07:13:18 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:12:45.310 07:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.310 07:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:45.310 Malloc_0 00:12:45.310 07:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.310 07:13:18 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0 00:12:45.310 07:13:18 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_0 00:12:45.310 07:13:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:45.310 07:13:18 -- common/autotest_common.sh@887 -- # local i 00:12:45.310 07:13:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:45.310 07:13:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:45.310 07:13:18 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:12:45.310 07:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.310 07:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:45.310 07:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.310 07:13:18 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:12:45.310 07:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.310 07:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:45.310 [ 00:12:45.310 { 00:12:45.310 "name": "Malloc_0", 00:12:45.310 "aliases": [ 00:12:45.310 "fced2516-4dd2-45c0-80ac-11519becf32d" 00:12:45.310 ], 00:12:45.310 "product_name": "Malloc disk", 00:12:45.310 "block_size": 512, 00:12:45.310 "num_blocks": 262144, 00:12:45.310 "uuid": "fced2516-4dd2-45c0-80ac-11519becf32d", 00:12:45.310 "assigned_rate_limits": { 00:12:45.310 "rw_ios_per_sec": 0, 00:12:45.310 "rw_mbytes_per_sec": 0, 00:12:45.310 "r_mbytes_per_sec": 0, 00:12:45.310 "w_mbytes_per_sec": 0 00:12:45.310 }, 00:12:45.310 "claimed": false, 00:12:45.310 "zoned": false, 00:12:45.310 "supported_io_types": { 00:12:45.310 "read": true, 00:12:45.310 "write": true, 00:12:45.310 "unmap": true, 00:12:45.310 "write_zeroes": true, 00:12:45.310 "flush": true, 00:12:45.310 "reset": true, 00:12:45.310 "compare": false, 00:12:45.310 "compare_and_write": false, 00:12:45.310 "abort": true, 00:12:45.310 "nvme_admin": false, 00:12:45.310 "nvme_io": false 00:12:45.310 }, 00:12:45.310 "memory_domains": [ 00:12:45.310 { 00:12:45.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.310 "dma_device_type": 2 00:12:45.310 } 00:12:45.310 ], 00:12:45.310 "driver_specific": {} 00:12:45.310 } 00:12:45.310 ] 00:12:45.310 07:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.310 07:13:18 -- common/autotest_common.sh@893 -- # return 0 00:12:45.310 07:13:18 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512 00:12:45.310 07:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.310 07:13:18 -- common/autotest_common.sh@10 -- # 
set +x 00:12:45.310 Null_1 00:12:45.310 07:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.310 07:13:18 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1 00:12:45.310 07:13:18 -- common/autotest_common.sh@885 -- # local bdev_name=Null_1 00:12:45.310 07:13:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:12:45.310 07:13:18 -- common/autotest_common.sh@887 -- # local i 00:12:45.310 07:13:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:12:45.310 07:13:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:12:45.310 07:13:18 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:12:45.310 07:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.310 07:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:45.310 07:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.310 07:13:18 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:12:45.310 07:13:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:45.310 07:13:18 -- common/autotest_common.sh@10 -- # set +x 00:12:45.310 [ 00:12:45.310 { 00:12:45.310 "name": "Null_1", 00:12:45.310 "aliases": [ 00:12:45.310 "d1b55023-ca57-4026-8f00-00725f4dc02b" 00:12:45.310 ], 00:12:45.310 "product_name": "Null disk", 00:12:45.310 "block_size": 512, 00:12:45.310 "num_blocks": 262144, 00:12:45.310 "uuid": "d1b55023-ca57-4026-8f00-00725f4dc02b", 00:12:45.310 "assigned_rate_limits": { 00:12:45.310 "rw_ios_per_sec": 0, 00:12:45.310 "rw_mbytes_per_sec": 0, 00:12:45.310 "r_mbytes_per_sec": 0, 00:12:45.310 "w_mbytes_per_sec": 0 00:12:45.310 }, 00:12:45.310 "claimed": false, 00:12:45.310 "zoned": false, 00:12:45.310 "supported_io_types": { 00:12:45.310 "read": true, 00:12:45.310 "write": true, 00:12:45.310 "unmap": false, 00:12:45.310 "write_zeroes": true, 00:12:45.310 "flush": false, 00:12:45.310 "reset": true, 00:12:45.310 "compare": false, 00:12:45.310 "compare_and_write": false, 00:12:45.311 "abort": true, 00:12:45.311 "nvme_admin": false, 00:12:45.311 "nvme_io": false 00:12:45.311 }, 00:12:45.311 "driver_specific": {} 00:12:45.311 } 00:12:45.311 ] 00:12:45.311 07:13:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:45.311 07:13:18 -- common/autotest_common.sh@893 -- # return 0 00:12:45.311 07:13:18 -- bdev/blockdev.sh@455 -- # qos_function_test 00:12:45.311 07:13:18 -- bdev/blockdev.sh@454 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:45.311 07:13:18 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000 00:12:45.311 07:13:18 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2 00:12:45.311 07:13:18 -- bdev/blockdev.sh@410 -- # local io_result=0 00:12:45.311 07:13:18 -- bdev/blockdev.sh@411 -- # local iops_limit=0 00:12:45.311 07:13:18 -- bdev/blockdev.sh@412 -- # local bw_limit=0 00:12:45.311 07:13:18 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0 00:12:45.311 07:13:18 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:45.311 07:13:18 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:45.311 07:13:18 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:45.311 07:13:18 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:45.311 07:13:18 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:45.311 07:13:18 -- bdev/blockdev.sh@376 -- # tail -1 00:12:45.578 Running I/O for 60 seconds... 
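The 60-second run starting here is a calibration pass: Malloc_0 runs unthrottled while iostat.py samples it, and the measured rate becomes the IOPS cap enforced next. Reconstructed from the numbers that follow (71780.85 IOPS measured, 17000 chosen), the derivation is roughly:

    iostat_result=$(scripts/iostat.py -d -i 1 -t 5 | grep Malloc_0 | tail -1)
    io_result=$(echo "$iostat_result" | awk '{print $2}')   # tps column -> 71780.85
    io_result=${io_result%%.*}                              # truncate -> 71780
    # a quarter of the unthrottled rate, rounded down to a multiple of 1000 -> 17000
    iops_limit=$(( io_result / 4 / 1000 * 1000 ))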
00:12:50.844 07:13:24 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 71780.85 287123.39 0.00 0.00 289792.00 0.00 0.00 ' 00:12:50.845 07:13:24 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:50.845 07:13:24 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:50.845 07:13:24 -- bdev/blockdev.sh@378 -- # iostat_result=71780.85 00:12:50.845 07:13:24 -- bdev/blockdev.sh@383 -- # echo 71780 00:12:50.845 07:13:24 -- bdev/blockdev.sh@414 -- # io_result=71780 00:12:50.845 07:13:24 -- bdev/blockdev.sh@416 -- # iops_limit=17000 00:12:50.845 07:13:24 -- bdev/blockdev.sh@417 -- # '[' 17000 -gt 1000 ']' 00:12:50.845 07:13:24 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 17000 Malloc_0 00:12:50.845 07:13:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.845 07:13:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.845 07:13:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.845 07:13:24 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 17000 IOPS Malloc_0 00:12:50.845 07:13:24 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:12:50.845 07:13:24 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:12:50.845 07:13:24 -- common/autotest_common.sh@10 -- # set +x 00:12:50.845 ************************************ 00:12:50.845 START TEST bdev_qos_iops 00:12:50.845 ************************************ 00:12:50.845 07:13:24 -- common/autotest_common.sh@1102 -- # run_qos_test 17000 IOPS Malloc_0 00:12:50.845 07:13:24 -- bdev/blockdev.sh@387 -- # local qos_limit=17000 00:12:50.845 07:13:24 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:12:50.845 07:13:24 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0 00:12:50.845 07:13:24 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS 00:12:50.845 07:13:24 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:12:50.845 07:13:24 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:50.845 07:13:24 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:50.845 07:13:24 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:12:50.845 07:13:24 -- bdev/blockdev.sh@376 -- # tail -1 00:12:56.125 07:13:29 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 17055.13 68220.52 0.00 0.00 69292.00 0.00 0.00 ' 00:12:56.125 07:13:29 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']' 00:12:56.125 07:13:29 -- bdev/blockdev.sh@378 -- # awk '{print $2}' 00:12:56.125 07:13:29 -- bdev/blockdev.sh@378 -- # iostat_result=17055.13 00:12:56.125 07:13:29 -- bdev/blockdev.sh@383 -- # echo 17055 00:12:56.125 07:13:29 -- bdev/blockdev.sh@390 -- # qos_result=17055 00:12:56.125 07:13:29 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']' 00:12:56.125 07:13:29 -- bdev/blockdev.sh@394 -- # lower_limit=15300 00:12:56.125 07:13:29 -- bdev/blockdev.sh@395 -- # upper_limit=18700 00:12:56.125 07:13:29 -- bdev/blockdev.sh@398 -- # '[' 17055 -lt 15300 ']' 00:12:56.125 07:13:29 -- bdev/blockdev.sh@398 -- # '[' 17055 -gt 18700 ']' 00:12:56.125 00:12:56.125 real 0m5.207s 00:12:56.125 user 0m0.119s 00:12:56.125 sys 0m0.022s 00:12:56.125 ************************************ 00:12:56.125 END TEST bdev_qos_iops 00:12:56.125 ************************************ 00:12:56.125 07:13:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:56.125 07:13:29 -- common/autotest_common.sh@10 -- # set +x 00:12:56.125 07:13:29 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1 00:12:56.125 07:13:29 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:12:56.125 07:13:29 -- 
bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:12:56.125 07:13:29 -- bdev/blockdev.sh@375 -- # local iostat_result 00:12:56.125 07:13:29 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:12:56.125 07:13:29 -- bdev/blockdev.sh@376 -- # grep Null_1 00:12:56.125 07:13:29 -- bdev/blockdev.sh@376 -- # tail -1 00:13:01.399 07:13:34 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 23827.60 95310.40 0.00 0.00 97280.00 0.00 0.00 ' 00:13:01.399 07:13:34 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:01.399 07:13:34 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:01.399 07:13:34 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:01.399 07:13:34 -- bdev/blockdev.sh@380 -- # iostat_result=97280.00 00:13:01.399 07:13:34 -- bdev/blockdev.sh@383 -- # echo 97280 00:13:01.399 07:13:34 -- bdev/blockdev.sh@425 -- # bw_limit=97280 00:13:01.399 07:13:34 -- bdev/blockdev.sh@426 -- # bw_limit=9 00:13:01.399 07:13:34 -- bdev/blockdev.sh@427 -- # '[' 9 -lt 2 ']' 00:13:01.399 07:13:34 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 9 Null_1 00:13:01.399 07:13:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:01.399 07:13:34 -- common/autotest_common.sh@10 -- # set +x 00:13:01.399 07:13:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:01.399 07:13:34 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 9 BANDWIDTH Null_1 00:13:01.399 07:13:34 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:13:01.399 07:13:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:01.399 07:13:34 -- common/autotest_common.sh@10 -- # set +x 00:13:01.399 ************************************ 00:13:01.399 START TEST bdev_qos_bw 00:13:01.399 ************************************ 00:13:01.399 07:13:34 -- common/autotest_common.sh@1102 -- # run_qos_test 9 BANDWIDTH Null_1 00:13:01.399 07:13:34 -- bdev/blockdev.sh@387 -- # local qos_limit=9 00:13:01.399 07:13:34 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:01.399 07:13:34 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1 00:13:01.399 07:13:34 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:01.399 07:13:34 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1 00:13:01.399 07:13:34 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:01.399 07:13:34 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:01.399 07:13:34 -- bdev/blockdev.sh@376 -- # grep Null_1 00:13:01.399 07:13:34 -- bdev/blockdev.sh@376 -- # tail -1 00:13:06.666 07:13:39 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2298.36 9193.45 0.00 0.00 9344.00 0.00 0.00 ' 00:13:06.666 07:13:39 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:06.666 07:13:39 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:06.666 07:13:39 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:06.666 07:13:39 -- bdev/blockdev.sh@380 -- # iostat_result=9344.00 00:13:06.666 07:13:39 -- bdev/blockdev.sh@383 -- # echo 9344 00:13:06.666 07:13:39 -- bdev/blockdev.sh@390 -- # qos_result=9344 00:13:06.666 07:13:39 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:06.666 07:13:39 -- bdev/blockdev.sh@392 -- # qos_limit=9216 00:13:06.666 07:13:39 -- bdev/blockdev.sh@394 -- # lower_limit=8294 00:13:06.666 07:13:39 -- bdev/blockdev.sh@395 -- # upper_limit=10137 00:13:06.666 07:13:39 -- bdev/blockdev.sh@398 -- # '[' 9344 -lt 8294 ']' 00:13:06.666 07:13:39 -- bdev/blockdev.sh@398 -- # '[' 9344 -gt 10137 ']' 
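Every run_qos_test verdict in this suite uses the same acceptance window: the observed rate must land within 10% either side of the configured limit. All of the printed bounds fit that rule (17000 -> [15300, 18700] and 9 MiB/s = 9216 KiB/s -> [8294, 10137] above, 2048 -> [1843, 2252] below), so the check reduces to:

    lower_limit=$(( qos_limit * 9 / 10 ))    # -10%
    upper_limit=$(( qos_limit * 11 / 10 ))   # +10%
    if [ "$qos_result" -lt "$lower_limit" ] || [ "$qos_result" -gt "$upper_limit" ]; then
        echo "qos result $qos_result outside [$lower_limit, $upper_limit]"
        exit 1
    fi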
00:13:06.666 ************************************ 00:13:06.666 END TEST bdev_qos_bw 00:13:06.666 ************************************ 00:13:06.666 00:13:06.666 real 0m5.223s 00:13:06.666 user 0m0.128s 00:13:06.666 sys 0m0.006s 00:13:06.666 07:13:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:06.666 07:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 07:13:39 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:13:06.666 07:13:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:06.666 07:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 07:13:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:06.666 07:13:39 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:13:06.666 07:13:39 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:13:06.666 07:13:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:06.666 07:13:39 -- common/autotest_common.sh@10 -- # set +x 00:13:06.666 ************************************ 00:13:06.666 START TEST bdev_qos_ro_bw 00:13:06.666 ************************************ 00:13:06.666 07:13:39 -- common/autotest_common.sh@1102 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:13:06.666 07:13:39 -- bdev/blockdev.sh@387 -- # local qos_limit=2 00:13:06.666 07:13:39 -- bdev/blockdev.sh@388 -- # local qos_result=0 00:13:06.666 07:13:39 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0 00:13:06.666 07:13:39 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH 00:13:06.666 07:13:39 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0 00:13:06.666 07:13:39 -- bdev/blockdev.sh@375 -- # local iostat_result 00:13:06.666 07:13:39 -- bdev/blockdev.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:13:06.666 07:13:39 -- bdev/blockdev.sh@376 -- # grep Malloc_0 00:13:06.666 07:13:39 -- bdev/blockdev.sh@376 -- # tail -1 00:13:11.936 07:13:45 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 512.86 2051.43 0.00 0.00 2072.00 0.00 0.00 ' 00:13:11.936 07:13:45 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']' 00:13:11.936 07:13:45 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:11.936 07:13:45 -- bdev/blockdev.sh@380 -- # awk '{print $6}' 00:13:11.936 07:13:45 -- bdev/blockdev.sh@380 -- # iostat_result=2072.00 00:13:11.936 07:13:45 -- bdev/blockdev.sh@383 -- # echo 2072 00:13:11.936 07:13:45 -- bdev/blockdev.sh@390 -- # qos_result=2072 00:13:11.936 07:13:45 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:13:11.936 07:13:45 -- bdev/blockdev.sh@392 -- # qos_limit=2048 00:13:11.936 07:13:45 -- bdev/blockdev.sh@394 -- # lower_limit=1843 00:13:11.936 07:13:45 -- bdev/blockdev.sh@395 -- # upper_limit=2252 00:13:11.936 07:13:45 -- bdev/blockdev.sh@398 -- # '[' 2072 -lt 1843 ']' 00:13:11.936 07:13:45 -- bdev/blockdev.sh@398 -- # '[' 2072 -gt 2252 ']' 00:13:11.936 00:13:11.936 real 0m5.162s 00:13:11.936 user 0m0.100s 00:13:11.936 sys 0m0.032s 00:13:11.936 07:13:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:11.936 07:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:11.936 ************************************ 00:13:11.936 END TEST bdev_qos_ro_bw 00:13:11.936 ************************************ 00:13:11.936 07:13:45 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:13:11.936 07:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:11.936 07:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:12.195 07:13:45 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.195 07:13:45 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1 00:13:12.195 07:13:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.195 07:13:45 -- common/autotest_common.sh@10 -- # set +x 00:13:12.453 00:13:12.453 Latency(us) 00:13:12.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.453 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:12.453 Malloc_0 : 26.64 24246.81 94.71 0.00 0.00 10461.15 2144.81 503316.48 00:13:12.453 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:12.453 Null_1 : 26.84 26438.08 103.27 0.00 0.00 9660.14 700.04 206855.45 00:13:12.453 =================================================================================================================== 00:13:12.454 Total : 50684.89 197.99 0.00 0.00 10041.78 700.04 503316.48 00:13:12.454 0 00:13:12.454 07:13:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.454 07:13:45 -- bdev/blockdev.sh@459 -- # killprocess 114815 00:13:12.454 07:13:45 -- common/autotest_common.sh@924 -- # '[' -z 114815 ']' 00:13:12.454 07:13:45 -- common/autotest_common.sh@928 -- # kill -0 114815 00:13:12.454 07:13:45 -- common/autotest_common.sh@929 -- # uname 00:13:12.454 07:13:45 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:12.454 07:13:45 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 114815 00:13:12.454 07:13:45 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:13:12.454 killing process with pid 114815 00:13:12.454 07:13:45 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:13:12.454 07:13:45 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 114815' 00:13:12.454 Received shutdown signal, test time was about 26.873025 seconds 00:13:12.454 00:13:12.454 Latency(us) 00:13:12.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.454 =================================================================================================================== 00:13:12.454 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.454 07:13:45 -- common/autotest_common.sh@943 -- # kill 114815 00:13:12.454 07:13:45 -- common/autotest_common.sh@948 -- # wait 114815 00:13:13.830 07:13:47 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT 00:13:13.830 00:13:13.830 real 0m29.374s 00:13:13.830 user 0m30.107s 00:13:13.830 sys 0m0.651s 00:13:13.830 07:13:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.830 ************************************ 00:13:13.830 END TEST bdev_qos 00:13:13.830 ************************************ 00:13:13.830 07:13:47 -- common/autotest_common.sh@10 -- # set +x 00:13:13.830 07:13:47 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:13:13.830 07:13:47 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:13.830 07:13:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:13.830 07:13:47 -- common/autotest_common.sh@10 -- # set +x 00:13:13.830 ************************************ 00:13:13.830 START TEST bdev_qd_sampling 00:13:13.830 ************************************ 00:13:13.830 07:13:47 -- common/autotest_common.sh@1102 -- # qd_sampling_test_suite '' 00:13:13.830 07:13:47 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD 00:13:13.830 07:13:47 -- bdev/blockdev.sh@539 -- # QD_PID=115339 00:13:13.830 Process bdev QD sampling period testing pid: 115339 00:13:13.830 07:13:47 -- 
bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 115339' 00:13:13.830 07:13:47 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:13:13.831 07:13:47 -- bdev/blockdev.sh@542 -- # waitforlisten 115339 00:13:13.831 07:13:47 -- bdev/blockdev.sh@538 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:13:13.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.831 07:13:47 -- common/autotest_common.sh@817 -- # '[' -z 115339 ']' 00:13:13.831 07:13:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.831 07:13:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:13.831 07:13:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.831 07:13:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:13.831 07:13:47 -- common/autotest_common.sh@10 -- # set +x 00:13:13.831 [2024-02-13 07:13:47.210961] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:13:13.831 [2024-02-13 07:13:47.211163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115339 ] 00:13:13.831 [2024-02-13 07:13:47.387858] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:14.090 [2024-02-13 07:13:47.637636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.090 [2024-02-13 07:13:47.637632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.657 07:13:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:14.657 07:13:48 -- common/autotest_common.sh@850 -- # return 0 00:13:14.657 07:13:48 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:13:14.657 07:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.657 07:13:48 -- common/autotest_common.sh@10 -- # set +x 00:13:14.657 Malloc_QD 00:13:14.657 07:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.657 07:13:48 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD 00:13:14.657 07:13:48 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_QD 00:13:14.657 07:13:48 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:14.657 07:13:48 -- common/autotest_common.sh@887 -- # local i 00:13:14.657 07:13:48 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:14.657 07:13:48 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:14.657 07:13:48 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:13:14.657 07:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.657 07:13:48 -- common/autotest_common.sh@10 -- # set +x 00:13:14.657 07:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.657 07:13:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:13:14.657 07:13:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:14.657 07:13:48 -- common/autotest_common.sh@10 -- # set +x 00:13:14.657 [ 00:13:14.657 { 00:13:14.657 "name": "Malloc_QD", 00:13:14.657 "aliases": [ 00:13:14.657 "a70c79a6-1f74-4fc6-887c-1e097aa61776" 00:13:14.657 ], 00:13:14.657 "product_name": "Malloc disk", 00:13:14.657 "block_size": 512, 00:13:14.657 "num_blocks": 262144, 
00:13:14.657 "uuid": "a70c79a6-1f74-4fc6-887c-1e097aa61776", 00:13:14.657 "assigned_rate_limits": { 00:13:14.657 "rw_ios_per_sec": 0, 00:13:14.657 "rw_mbytes_per_sec": 0, 00:13:14.657 "r_mbytes_per_sec": 0, 00:13:14.657 "w_mbytes_per_sec": 0 00:13:14.657 }, 00:13:14.657 "claimed": false, 00:13:14.657 "zoned": false, 00:13:14.657 "supported_io_types": { 00:13:14.657 "read": true, 00:13:14.657 "write": true, 00:13:14.657 "unmap": true, 00:13:14.657 "write_zeroes": true, 00:13:14.657 "flush": true, 00:13:14.657 "reset": true, 00:13:14.657 "compare": false, 00:13:14.657 "compare_and_write": false, 00:13:14.657 "abort": true, 00:13:14.657 "nvme_admin": false, 00:13:14.657 "nvme_io": false 00:13:14.657 }, 00:13:14.657 "memory_domains": [ 00:13:14.657 { 00:13:14.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.657 "dma_device_type": 2 00:13:14.657 } 00:13:14.657 ], 00:13:14.657 "driver_specific": {} 00:13:14.657 } 00:13:14.657 ] 00:13:14.657 07:13:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:14.657 07:13:48 -- common/autotest_common.sh@893 -- # return 0 00:13:14.657 07:13:48 -- bdev/blockdev.sh@548 -- # sleep 2 00:13:14.657 07:13:48 -- bdev/blockdev.sh@547 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:14.916 Running I/O for 5 seconds... 00:13:16.843 07:13:50 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD 00:13:16.843 07:13:50 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD 00:13:16.843 07:13:50 -- bdev/blockdev.sh@518 -- # local sampling_period=10 00:13:16.843 07:13:50 -- bdev/blockdev.sh@519 -- # local iostats 00:13:16.843 07:13:50 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:13:16.843 07:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.843 07:13:50 -- common/autotest_common.sh@10 -- # set +x 00:13:16.843 07:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.843 07:13:50 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:13:16.843 07:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.843 07:13:50 -- common/autotest_common.sh@10 -- # set +x 00:13:16.843 07:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.843 07:13:50 -- bdev/blockdev.sh@523 -- # iostats='{ 00:13:16.843 "tick_rate": 2200000000, 00:13:16.843 "ticks": 1583280499756, 00:13:16.843 "bdevs": [ 00:13:16.843 { 00:13:16.843 "name": "Malloc_QD", 00:13:16.843 "bytes_read": 968921600, 00:13:16.843 "num_read_ops": 236547, 00:13:16.843 "bytes_written": 0, 00:13:16.843 "num_write_ops": 0, 00:13:16.843 "bytes_unmapped": 0, 00:13:16.843 "num_unmap_ops": 0, 00:13:16.843 "bytes_copied": 0, 00:13:16.843 "num_copy_ops": 0, 00:13:16.843 "read_latency_ticks": 2168135145735, 00:13:16.843 "max_read_latency_ticks": 13354136, 00:13:16.843 "min_read_latency_ticks": 329342, 00:13:16.843 "write_latency_ticks": 0, 00:13:16.843 "max_write_latency_ticks": 0, 00:13:16.843 "min_write_latency_ticks": 0, 00:13:16.843 "unmap_latency_ticks": 0, 00:13:16.843 "max_unmap_latency_ticks": 0, 00:13:16.843 "min_unmap_latency_ticks": 0, 00:13:16.843 "copy_latency_ticks": 0, 00:13:16.843 "max_copy_latency_ticks": 0, 00:13:16.843 "min_copy_latency_ticks": 0, 00:13:16.843 "io_error": {}, 00:13:16.843 "queue_depth_polling_period": 10, 00:13:16.843 "queue_depth": 512, 00:13:16.843 "io_time": 20, 00:13:16.843 "weighted_io_time": 10240 00:13:16.843 } 00:13:16.843 ] 00:13:16.843 }' 00:13:16.843 07:13:50 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period' 
00:13:16.843 07:13:50 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10 00:13:16.843 07:13:50 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']' 00:13:16.843 07:13:50 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']' 00:13:16.843 07:13:50 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:13:16.843 07:13:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.843 07:13:50 -- common/autotest_common.sh@10 -- # set +x 00:13:16.843 00:13:16.843 Latency(us) 00:13:16.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.843 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:16.843 Malloc_QD : 2.00 60852.79 237.71 0.00 0.00 4197.07 1027.72 6076.97 00:13:16.843 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:16.843 Malloc_QD : 2.00 61719.85 241.09 0.00 0.00 4138.54 822.92 4885.41 00:13:16.843 =================================================================================================================== 00:13:16.843 Total : 122572.64 478.80 0.00 0.00 4167.59 822.92 6076.97 00:13:16.843 0 00:13:16.843 07:13:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.843 07:13:50 -- bdev/blockdev.sh@552 -- # killprocess 115339 00:13:16.843 07:13:50 -- common/autotest_common.sh@924 -- # '[' -z 115339 ']' 00:13:16.843 07:13:50 -- common/autotest_common.sh@928 -- # kill -0 115339 00:13:16.843 07:13:50 -- common/autotest_common.sh@929 -- # uname 00:13:16.843 07:13:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:16.843 07:13:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 115339 00:13:16.843 07:13:50 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:16.843 07:13:50 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:16.843 07:13:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 115339' 00:13:16.843 killing process with pid 115339 00:13:16.843 Received shutdown signal, test time was about 2.130095 seconds 00:13:16.843 00:13:16.843 Latency(us) 00:13:16.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.843 =================================================================================================================== 00:13:16.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:16.843 07:13:50 -- common/autotest_common.sh@943 -- # kill 115339 00:13:16.843 07:13:50 -- common/autotest_common.sh@948 -- # wait 115339 00:13:18.222 07:13:51 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:13:18.222 00:13:18.222 real 0m4.629s 00:13:18.222 user 0m8.538s 00:13:18.222 sys 0m0.402s 00:13:18.222 ************************************ 00:13:18.222 END TEST bdev_qd_sampling 00:13:18.222 ************************************ 00:13:18.222 07:13:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:18.222 07:13:51 -- common/autotest_common.sh@10 -- # set +x 00:13:18.222 07:13:51 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:13:18.222 07:13:51 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:18.222 07:13:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:18.222 07:13:51 -- common/autotest_common.sh@10 -- # set +x 00:13:18.222 ************************************ 00:13:18.222 START TEST bdev_error 00:13:18.222 ************************************ 00:13:18.222 07:13:51 -- common/autotest_common.sh@1102 -- # error_test_suite '' 00:13:18.222 07:13:51 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:13:18.222 
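The error suite prepared here layers an error-injection vbdev over a plain malloc bdev; each RPC in the sequence appears verbatim in the trace below. The whole flow, end to end:

    rpc_cmd bdev_malloc_create -b Dev_1 128 512                 # 128 MiB bdev, 512 B blocks
    rpc_cmd bdev_error_create Dev_1                             # exposes the wrapper EE_Dev_1
    rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 IOs
    rpc_cmd bdev_error_delete EE_Dev_1                          # tear down in reverse order
    rpc_cmd bdev_malloc_delete Dev_1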
07:13:51 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:13:18.222 07:13:51 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:13:18.222 07:13:51 -- bdev/blockdev.sh@470 -- # ERR_PID=115426 00:13:18.222 Process error testing pid: 115426 00:13:18.222 07:13:51 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 115426' 00:13:18.222 07:13:51 -- bdev/blockdev.sh@472 -- # waitforlisten 115426 00:13:18.222 07:13:51 -- bdev/blockdev.sh@469 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:13:18.222 07:13:51 -- common/autotest_common.sh@817 -- # '[' -z 115426 ']' 00:13:18.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.222 07:13:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.222 07:13:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:18.222 07:13:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.222 07:13:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:18.222 07:13:51 -- common/autotest_common.sh@10 -- # set +x 00:13:18.222 [2024-02-13 07:13:51.905769] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:13:18.222 [2024-02-13 07:13:51.905967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115426 ] 00:13:18.481 [2024-02-13 07:13:52.075518] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.739 [2024-02-13 07:13:52.273885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.305 07:13:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:19.305 07:13:52 -- common/autotest_common.sh@850 -- # return 0 00:13:19.305 07:13:52 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:19.305 07:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.305 07:13:52 -- common/autotest_common.sh@10 -- # set +x 00:13:19.305 Dev_1 00:13:19.305 07:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.305 07:13:52 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:13:19.305 07:13:52 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:13:19.305 07:13:52 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:19.305 07:13:52 -- common/autotest_common.sh@887 -- # local i 00:13:19.305 07:13:52 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:19.305 07:13:52 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:19.305 07:13:52 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:13:19.305 07:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.305 07:13:52 -- common/autotest_common.sh@10 -- # set +x 00:13:19.305 07:13:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.305 07:13:52 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:19.305 07:13:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.305 07:13:52 -- common/autotest_common.sh@10 -- # set +x 00:13:19.565 [ 00:13:19.565 { 00:13:19.565 "name": "Dev_1", 00:13:19.565 "aliases": [ 00:13:19.565 "ce672160-47ce-469c-a60e-c143162a1105" 00:13:19.565 ], 00:13:19.565 "product_name": "Malloc disk", 00:13:19.565 "block_size": 512, 00:13:19.565 "num_blocks": 262144, 00:13:19.565 
"uuid": "ce672160-47ce-469c-a60e-c143162a1105", 00:13:19.565 "assigned_rate_limits": { 00:13:19.565 "rw_ios_per_sec": 0, 00:13:19.565 "rw_mbytes_per_sec": 0, 00:13:19.565 "r_mbytes_per_sec": 0, 00:13:19.565 "w_mbytes_per_sec": 0 00:13:19.565 }, 00:13:19.565 "claimed": false, 00:13:19.565 "zoned": false, 00:13:19.565 "supported_io_types": { 00:13:19.565 "read": true, 00:13:19.565 "write": true, 00:13:19.565 "unmap": true, 00:13:19.565 "write_zeroes": true, 00:13:19.565 "flush": true, 00:13:19.565 "reset": true, 00:13:19.565 "compare": false, 00:13:19.565 "compare_and_write": false, 00:13:19.565 "abort": true, 00:13:19.565 "nvme_admin": false, 00:13:19.565 "nvme_io": false 00:13:19.565 }, 00:13:19.565 "memory_domains": [ 00:13:19.565 { 00:13:19.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.565 "dma_device_type": 2 00:13:19.565 } 00:13:19.565 ], 00:13:19.565 "driver_specific": {} 00:13:19.565 } 00:13:19.565 ] 00:13:19.565 07:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.565 07:13:53 -- common/autotest_common.sh@893 -- # return 0 00:13:19.565 07:13:53 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:13:19.565 07:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.565 07:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.565 true 00:13:19.565 07:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.565 07:13:53 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:19.565 07:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.565 07:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.565 Dev_2 00:13:19.565 07:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.565 07:13:53 -- bdev/blockdev.sh@478 -- # waitforbdev Dev_2 00:13:19.565 07:13:53 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:13:19.565 07:13:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:19.565 07:13:53 -- common/autotest_common.sh@887 -- # local i 00:13:19.565 07:13:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:19.565 07:13:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:19.565 07:13:53 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:13:19.565 07:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.565 07:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.565 07:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.565 07:13:53 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:19.565 07:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.565 07:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.565 [ 00:13:19.565 { 00:13:19.565 "name": "Dev_2", 00:13:19.565 "aliases": [ 00:13:19.565 "323e9295-a311-4509-895f-ce56a044ecc3" 00:13:19.565 ], 00:13:19.565 "product_name": "Malloc disk", 00:13:19.565 "block_size": 512, 00:13:19.565 "num_blocks": 262144, 00:13:19.565 "uuid": "323e9295-a311-4509-895f-ce56a044ecc3", 00:13:19.565 "assigned_rate_limits": { 00:13:19.565 "rw_ios_per_sec": 0, 00:13:19.565 "rw_mbytes_per_sec": 0, 00:13:19.565 "r_mbytes_per_sec": 0, 00:13:19.565 "w_mbytes_per_sec": 0 00:13:19.565 }, 00:13:19.565 "claimed": false, 00:13:19.565 "zoned": false, 00:13:19.565 "supported_io_types": { 00:13:19.565 "read": true, 00:13:19.565 "write": true, 00:13:19.565 "unmap": true, 00:13:19.565 "write_zeroes": true, 00:13:19.565 "flush": true, 00:13:19.565 "reset": true, 00:13:19.565 "compare": false, 00:13:19.565 
"compare_and_write": false, 00:13:19.565 "abort": true, 00:13:19.565 "nvme_admin": false, 00:13:19.565 "nvme_io": false 00:13:19.565 }, 00:13:19.565 "memory_domains": [ 00:13:19.565 { 00:13:19.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.565 "dma_device_type": 2 00:13:19.565 } 00:13:19.565 ], 00:13:19.565 "driver_specific": {} 00:13:19.565 } 00:13:19.565 ] 00:13:19.565 07:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.565 07:13:53 -- common/autotest_common.sh@893 -- # return 0 00:13:19.565 07:13:53 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:19.565 07:13:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:19.565 07:13:53 -- common/autotest_common.sh@10 -- # set +x 00:13:19.565 07:13:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:19.565 07:13:53 -- bdev/blockdev.sh@482 -- # sleep 1 00:13:19.565 07:13:53 -- bdev/blockdev.sh@481 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:19.824 Running I/O for 5 seconds... 00:13:20.761 07:13:54 -- bdev/blockdev.sh@485 -- # kill -0 115426 00:13:20.761 07:13:54 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 115426' 00:13:20.761 Process is existed as continue on error is set. Pid: 115426 00:13:20.761 07:13:54 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:13:20.761 07:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.761 07:13:54 -- common/autotest_common.sh@10 -- # set +x 00:13:20.761 07:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:20.761 07:13:54 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:13:20.761 07:13:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:20.761 07:13:54 -- common/autotest_common.sh@10 -- # set +x 00:13:20.761 Timeout while waiting for response: 00:13:20.761 00:13:20.761 00:13:21.020 07:13:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:21.020 07:13:54 -- bdev/blockdev.sh@495 -- # sleep 5 00:13:25.207 00:13:25.207 Latency(us) 00:13:25.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.207 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:25.207 EE_Dev_1 : 0.92 41804.13 163.30 5.43 0.00 380.02 129.40 700.04 00:13:25.207 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:25.207 Dev_2 : 5.00 84551.19 330.28 0.00 0.00 186.57 56.55 297414.28 00:13:25.207 =================================================================================================================== 00:13:25.207 Total : 126355.33 493.58 5.43 0.00 202.72 56.55 297414.28 00:13:26.144 07:13:59 -- bdev/blockdev.sh@497 -- # killprocess 115426 00:13:26.144 07:13:59 -- common/autotest_common.sh@924 -- # '[' -z 115426 ']' 00:13:26.144 07:13:59 -- common/autotest_common.sh@928 -- # kill -0 115426 00:13:26.144 07:13:59 -- common/autotest_common.sh@929 -- # uname 00:13:26.144 07:13:59 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:26.144 07:13:59 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 115426 00:13:26.144 07:13:59 -- common/autotest_common.sh@930 -- # process_name=reactor_1 00:13:26.144 killing process with pid 115426 00:13:26.144 07:13:59 -- common/autotest_common.sh@934 -- # '[' reactor_1 = sudo ']' 00:13:26.144 07:13:59 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 115426' 00:13:26.144 07:13:59 -- common/autotest_common.sh@943 -- # kill 115426 
00:13:26.144 Received shutdown signal, test time was about 5.000000 seconds 00:13:26.144 00:13:26.144 Latency(us) 00:13:26.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:26.144 =================================================================================================================== 00:13:26.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:26.144 07:13:59 -- common/autotest_common.sh@948 -- # wait 115426 00:13:27.523 07:14:00 -- bdev/blockdev.sh@500 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:13:27.523 07:14:00 -- bdev/blockdev.sh@501 -- # ERR_PID=115563 00:13:27.523 07:14:00 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 115563' 00:13:27.523 Process error testing pid: 115563 00:13:27.523 07:14:00 -- bdev/blockdev.sh@503 -- # waitforlisten 115563 00:13:27.523 07:14:00 -- common/autotest_common.sh@817 -- # '[' -z 115563 ']' 00:13:27.523 07:14:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.523 07:14:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:27.523 07:14:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.523 07:14:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:27.523 07:14:00 -- common/autotest_common.sh@10 -- # set +x 00:13:27.523 [2024-02-13 07:14:00.947927] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:13:27.523 [2024-02-13 07:14:00.948101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115563 ] 00:13:27.523 [2024-02-13 07:14:01.104551] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.782 [2024-02-13 07:14:01.303616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.349 07:14:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:28.349 07:14:01 -- common/autotest_common.sh@850 -- # return 0 00:13:28.349 07:14:01 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:13:28.349 07:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.349 07:14:01 -- common/autotest_common.sh@10 -- # set +x 00:13:28.349 Dev_1 00:13:28.349 07:14:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.349 07:14:01 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:13:28.349 07:14:01 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_1 00:13:28.349 07:14:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:28.349 07:14:01 -- common/autotest_common.sh@887 -- # local i 00:13:28.349 07:14:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:28.349 07:14:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:28.349 07:14:01 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:13:28.349 07:14:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.349 07:14:01 -- common/autotest_common.sh@10 -- # set +x 00:13:28.349 07:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.349 07:14:02 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:13:28.349 07:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:13:28.349 07:14:02 -- common/autotest_common.sh@10 -- # set +x 00:13:28.349 [ 00:13:28.349 { 00:13:28.349 "name": "Dev_1", 00:13:28.349 "aliases": [ 00:13:28.349 "e6d933ba-b04e-4d4b-bbe5-25ef64fe795f" 00:13:28.349 ], 00:13:28.350 "product_name": "Malloc disk", 00:13:28.350 "block_size": 512, 00:13:28.350 "num_blocks": 262144, 00:13:28.350 "uuid": "e6d933ba-b04e-4d4b-bbe5-25ef64fe795f", 00:13:28.350 "assigned_rate_limits": { 00:13:28.350 "rw_ios_per_sec": 0, 00:13:28.350 "rw_mbytes_per_sec": 0, 00:13:28.350 "r_mbytes_per_sec": 0, 00:13:28.350 "w_mbytes_per_sec": 0 00:13:28.350 }, 00:13:28.350 "claimed": false, 00:13:28.350 "zoned": false, 00:13:28.350 "supported_io_types": { 00:13:28.350 "read": true, 00:13:28.350 "write": true, 00:13:28.350 "unmap": true, 00:13:28.350 "write_zeroes": true, 00:13:28.350 "flush": true, 00:13:28.350 "reset": true, 00:13:28.350 "compare": false, 00:13:28.350 "compare_and_write": false, 00:13:28.350 "abort": true, 00:13:28.350 "nvme_admin": false, 00:13:28.350 "nvme_io": false 00:13:28.350 }, 00:13:28.350 "memory_domains": [ 00:13:28.350 { 00:13:28.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.350 "dma_device_type": 2 00:13:28.350 } 00:13:28.350 ], 00:13:28.350 "driver_specific": {} 00:13:28.350 } 00:13:28.350 ] 00:13:28.350 07:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.350 07:14:02 -- common/autotest_common.sh@893 -- # return 0 00:13:28.350 07:14:02 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:13:28.350 07:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.350 07:14:02 -- common/autotest_common.sh@10 -- # set +x 00:13:28.350 true 00:13:28.350 07:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.350 07:14:02 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:13:28.350 07:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.350 07:14:02 -- common/autotest_common.sh@10 -- # set +x 00:13:28.608 Dev_2 00:13:28.608 07:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.608 07:14:02 -- bdev/blockdev.sh@509 -- # waitforbdev Dev_2 00:13:28.608 07:14:02 -- common/autotest_common.sh@885 -- # local bdev_name=Dev_2 00:13:28.608 07:14:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:28.608 07:14:02 -- common/autotest_common.sh@887 -- # local i 00:13:28.608 07:14:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:28.608 07:14:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:28.608 07:14:02 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:13:28.608 07:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.608 07:14:02 -- common/autotest_common.sh@10 -- # set +x 00:13:28.608 07:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.608 07:14:02 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:13:28.608 07:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.608 07:14:02 -- common/autotest_common.sh@10 -- # set +x 00:13:28.608 [ 00:13:28.608 { 00:13:28.608 "name": "Dev_2", 00:13:28.608 "aliases": [ 00:13:28.609 "7bec8f04-0c8a-45b5-9c4f-d85a42221e52" 00:13:28.609 ], 00:13:28.609 "product_name": "Malloc disk", 00:13:28.609 "block_size": 512, 00:13:28.609 "num_blocks": 262144, 00:13:28.609 "uuid": "7bec8f04-0c8a-45b5-9c4f-d85a42221e52", 00:13:28.609 "assigned_rate_limits": { 00:13:28.609 "rw_ios_per_sec": 0, 00:13:28.609 "rw_mbytes_per_sec": 0, 00:13:28.609 "r_mbytes_per_sec": 0, 00:13:28.609 
"w_mbytes_per_sec": 0 00:13:28.609 }, 00:13:28.609 "claimed": false, 00:13:28.609 "zoned": false, 00:13:28.609 "supported_io_types": { 00:13:28.609 "read": true, 00:13:28.609 "write": true, 00:13:28.609 "unmap": true, 00:13:28.609 "write_zeroes": true, 00:13:28.609 "flush": true, 00:13:28.609 "reset": true, 00:13:28.609 "compare": false, 00:13:28.609 "compare_and_write": false, 00:13:28.609 "abort": true, 00:13:28.609 "nvme_admin": false, 00:13:28.609 "nvme_io": false 00:13:28.609 }, 00:13:28.609 "memory_domains": [ 00:13:28.609 { 00:13:28.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.609 "dma_device_type": 2 00:13:28.609 } 00:13:28.609 ], 00:13:28.609 "driver_specific": {} 00:13:28.609 } 00:13:28.609 ] 00:13:28.609 07:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.609 07:14:02 -- common/autotest_common.sh@893 -- # return 0 00:13:28.609 07:14:02 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:13:28.609 07:14:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:28.609 07:14:02 -- common/autotest_common.sh@10 -- # set +x 00:13:28.609 07:14:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:28.609 07:14:02 -- bdev/blockdev.sh@513 -- # NOT wait 115563 00:13:28.609 07:14:02 -- bdev/blockdev.sh@512 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:13:28.609 07:14:02 -- common/autotest_common.sh@638 -- # local es=0 00:13:28.609 07:14:02 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 115563 00:13:28.609 07:14:02 -- common/autotest_common.sh@626 -- # local arg=wait 00:13:28.609 07:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:28.609 07:14:02 -- common/autotest_common.sh@630 -- # type -t wait 00:13:28.609 07:14:02 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:13:28.609 07:14:02 -- common/autotest_common.sh@641 -- # wait 115563 00:13:28.867 Running I/O for 5 seconds... 
00:13:28.867 task offset: 93384 on job bdev=EE_Dev_1 fails 00:13:28.867 00:13:28.867 Latency(us) 00:13:28.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:28.867 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:28.867 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:13:28.867 EE_Dev_1 : 0.00 31884.06 124.55 7246.38 0.00 338.63 129.40 606.95 00:13:28.867 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:13:28.867 Dev_2 : 0.00 19875.78 77.64 0.00 0.00 589.94 116.83 1102.20 00:13:28.867 =================================================================================================================== 00:13:28.867 Total : 51759.83 202.19 7246.38 0.00 474.93 116.83 1102.20 00:13:28.867 [2024-02-13 07:14:02.305577] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:28.867 request: 00:13:28.867 { 00:13:28.867 "method": "perform_tests", 00:13:28.867 "req_id": 1 00:13:28.867 } 00:13:28.867 Got JSON-RPC error response 00:13:28.867 response: 00:13:28.867 { 00:13:28.867 "code": -32603, 00:13:28.867 "message": "bdevperf failed with error Operation not permitted" 00:13:28.867 } 00:13:30.773 ************************************ 00:13:30.773 END TEST bdev_error 00:13:30.773 ************************************ 00:13:30.773 07:14:03 -- common/autotest_common.sh@641 -- # es=255 00:13:30.773 07:14:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:13:30.773 07:14:03 -- common/autotest_common.sh@650 -- # es=127 00:13:30.773 07:14:03 -- common/autotest_common.sh@651 -- # case "$es" in 00:13:30.773 07:14:03 -- common/autotest_common.sh@658 -- # es=1 00:13:30.773 07:14:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:13:30.773 00:13:30.773 real 0m12.148s 00:13:30.773 user 0m12.275s 00:13:30.773 sys 0m0.867s 00:13:30.773 07:14:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:30.773 07:14:03 -- common/autotest_common.sh@10 -- # set +x 00:13:30.773 07:14:04 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:13:30.773 07:14:04 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:30.773 07:14:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:30.773 07:14:04 -- common/autotest_common.sh@10 -- # set +x 00:13:30.773 ************************************ 00:13:30.773 START TEST bdev_stat 00:13:30.773 ************************************ 00:13:30.773 07:14:04 -- common/autotest_common.sh@1102 -- # stat_test_suite '' 00:13:30.773 07:14:04 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:13:30.773 07:14:04 -- bdev/blockdev.sh@594 -- # STAT_PID=115626 00:13:30.773 07:14:04 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 115626' 00:13:30.773 Process Bdev IO statistics testing pid: 115626 00:13:30.773 07:14:04 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:13:30.773 07:14:04 -- bdev/blockdev.sh@597 -- # waitforlisten 115626 00:13:30.773 07:14:04 -- bdev/blockdev.sh@593 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:13:30.773 07:14:04 -- common/autotest_common.sh@817 -- # '[' -z 115626 ']' 00:13:30.773 07:14:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.773 07:14:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:30.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
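[Note] For the stat suite the harness starts a fresh bdevperf in the same wait-for-RPC mode but with two reactors, which is why bdev_get_iostat later returns two per-channel entries. The flags, annotated; the reading of -C is an assumption inferred from the per-channel queries below, and the trailing '' is the empty config argument the harness appends:

  #   -z           start idle and wait for the perform_tests RPC
  #   -m 0x3       core mask: reactors on cores 0 and 1
  #   -q 256       queue depth;  -o 4096: 4 KiB I/Os
  #   -w randread  workload;     -t 10: ten-second run
  #   -C           per-channel statistics (assumption, see note above)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C ''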
00:13:30.773 07:14:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.773 07:14:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:30.773 07:14:04 -- common/autotest_common.sh@10 -- # set +x 00:13:30.773 [2024-02-13 07:14:04.120204] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:13:30.773 [2024-02-13 07:14:04.120411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid115626 ] 00:13:30.773 [2024-02-13 07:14:04.298113] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:31.032 [2024-02-13 07:14:04.496044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.032 [2024-02-13 07:14:04.496038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.600 07:14:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:31.600 07:14:05 -- common/autotest_common.sh@850 -- # return 0 00:13:31.600 07:14:05 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:13:31.600 07:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.600 07:14:05 -- common/autotest_common.sh@10 -- # set +x 00:13:31.600 Malloc_STAT 00:13:31.600 07:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.600 07:14:05 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:13:31.600 07:14:05 -- common/autotest_common.sh@885 -- # local bdev_name=Malloc_STAT 00:13:31.600 07:14:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:31.600 07:14:05 -- common/autotest_common.sh@887 -- # local i 00:13:31.600 07:14:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:31.600 07:14:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:31.600 07:14:05 -- common/autotest_common.sh@890 -- # rpc_cmd bdev_wait_for_examine 00:13:31.600 07:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.600 07:14:05 -- common/autotest_common.sh@10 -- # set +x 00:13:31.600 07:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.600 07:14:05 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:13:31.600 07:14:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:31.600 07:14:05 -- common/autotest_common.sh@10 -- # set +x 00:13:31.600 [ 00:13:31.600 { 00:13:31.600 "name": "Malloc_STAT", 00:13:31.600 "aliases": [ 00:13:31.601 "b854fbc4-abb2-440c-af42-6bd7910acd84" 00:13:31.601 ], 00:13:31.601 "product_name": "Malloc disk", 00:13:31.601 "block_size": 512, 00:13:31.601 "num_blocks": 262144, 00:13:31.601 "uuid": "b854fbc4-abb2-440c-af42-6bd7910acd84", 00:13:31.601 "assigned_rate_limits": { 00:13:31.601 "rw_ios_per_sec": 0, 00:13:31.601 "rw_mbytes_per_sec": 0, 00:13:31.601 "r_mbytes_per_sec": 0, 00:13:31.601 "w_mbytes_per_sec": 0 00:13:31.601 }, 00:13:31.601 "claimed": false, 00:13:31.601 "zoned": false, 00:13:31.601 "supported_io_types": { 00:13:31.601 "read": true, 00:13:31.601 "write": true, 00:13:31.601 "unmap": true, 00:13:31.601 "write_zeroes": true, 00:13:31.601 "flush": true, 00:13:31.601 "reset": true, 00:13:31.601 "compare": false, 00:13:31.601 "compare_and_write": false, 00:13:31.601 "abort": true, 00:13:31.601 "nvme_admin": false, 00:13:31.601 "nvme_io": false 00:13:31.601 }, 00:13:31.601 "memory_domains": [ 00:13:31.601 { 00:13:31.601 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.601 "dma_device_type": 2 00:13:31.601 } 00:13:31.601 ], 00:13:31.601 "driver_specific": {} 00:13:31.601 } 00:13:31.601 ] 00:13:31.601 07:14:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:31.601 07:14:05 -- common/autotest_common.sh@893 -- # return 0 00:13:31.601 07:14:05 -- bdev/blockdev.sh@603 -- # sleep 2 00:13:31.601 07:14:05 -- bdev/blockdev.sh@602 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:31.859 Running I/O for 10 seconds... 00:13:33.764 07:14:07 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:13:33.764 07:14:07 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:13:33.764 07:14:07 -- bdev/blockdev.sh@558 -- # local iostats 00:13:33.764 07:14:07 -- bdev/blockdev.sh@559 -- # local io_count1 00:13:33.764 07:14:07 -- bdev/blockdev.sh@560 -- # local io_count2 00:13:33.764 07:14:07 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:13:33.764 07:14:07 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:13:33.764 07:14:07 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:13:33.764 07:14:07 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:13:33.764 07:14:07 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:33.764 07:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.764 07:14:07 -- common/autotest_common.sh@10 -- # set +x 00:13:33.764 07:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.764 07:14:07 -- bdev/blockdev.sh@566 -- # iostats='{ 00:13:33.764 "tick_rate": 2200000000, 00:13:33.765 "ticks": 1620525360134, 00:13:33.765 "bdevs": [ 00:13:33.765 { 00:13:33.765 "name": "Malloc_STAT", 00:13:33.765 "bytes_read": 903909888, 00:13:33.765 "num_read_ops": 220675, 00:13:33.765 "bytes_written": 0, 00:13:33.765 "num_write_ops": 0, 00:13:33.765 "bytes_unmapped": 0, 00:13:33.765 "num_unmap_ops": 0, 00:13:33.765 "bytes_copied": 0, 00:13:33.765 "num_copy_ops": 0, 00:13:33.765 "read_latency_ticks": 2138484634007, 00:13:33.765 "max_read_latency_ticks": 13384788, 00:13:33.765 "min_read_latency_ticks": 332644, 00:13:33.765 "write_latency_ticks": 0, 00:13:33.765 "max_write_latency_ticks": 0, 00:13:33.765 "min_write_latency_ticks": 0, 00:13:33.765 "unmap_latency_ticks": 0, 00:13:33.765 "max_unmap_latency_ticks": 0, 00:13:33.765 "min_unmap_latency_ticks": 0, 00:13:33.765 "copy_latency_ticks": 0, 00:13:33.765 "max_copy_latency_ticks": 0, 00:13:33.765 "min_copy_latency_ticks": 0, 00:13:33.765 "io_error": {} 00:13:33.765 } 00:13:33.765 ] 00:13:33.765 }' 00:13:33.765 07:14:07 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:13:33.765 07:14:07 -- bdev/blockdev.sh@567 -- # io_count1=220675 00:13:33.765 07:14:07 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:13:33.765 07:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:33.765 07:14:07 -- common/autotest_common.sh@10 -- # set +x 00:13:33.765 07:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:33.765 07:14:07 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:13:33.765 "tick_rate": 2200000000, 00:13:33.765 "ticks": 1620696938874, 00:13:33.765 "name": "Malloc_STAT", 00:13:33.765 "channels": [ 00:13:33.765 { 00:13:33.765 "thread_id": 2, 00:13:33.765 "bytes_read": 470810624, 00:13:33.765 "num_read_ops": 114944, 00:13:33.765 "bytes_written": 0, 00:13:33.765 "num_write_ops": 0, 00:13:33.765 "bytes_unmapped": 0, 00:13:33.765 "num_unmap_ops": 0, 00:13:33.765 "bytes_copied": 
0, 00:13:33.765 "num_copy_ops": 0, 00:13:33.765 "read_latency_ticks": 1112625345302, 00:13:33.765 "max_read_latency_ticks": 12970141, 00:13:33.765 "min_read_latency_ticks": 7383818, 00:13:33.765 "write_latency_ticks": 0, 00:13:33.765 "max_write_latency_ticks": 0, 00:13:33.765 "min_write_latency_ticks": 0, 00:13:33.765 "unmap_latency_ticks": 0, 00:13:33.765 "max_unmap_latency_ticks": 0, 00:13:33.765 "min_unmap_latency_ticks": 0, 00:13:33.765 "copy_latency_ticks": 0, 00:13:33.765 "max_copy_latency_ticks": 0, 00:13:33.765 "min_copy_latency_ticks": 0 00:13:33.765 }, 00:13:33.765 { 00:13:33.765 "thread_id": 3, 00:13:33.765 "bytes_read": 468713472, 00:13:33.765 "num_read_ops": 114432, 00:13:33.765 "bytes_written": 0, 00:13:33.765 "num_write_ops": 0, 00:13:33.765 "bytes_unmapped": 0, 00:13:33.765 "num_unmap_ops": 0, 00:13:33.765 "bytes_copied": 0, 00:13:33.765 "num_copy_ops": 0, 00:13:33.765 "read_latency_ticks": 1113754266365, 00:13:33.765 "max_read_latency_ticks": 13384788, 00:13:33.765 "min_read_latency_ticks": 7401860, 00:13:33.765 "write_latency_ticks": 0, 00:13:33.765 "max_write_latency_ticks": 0, 00:13:33.765 "min_write_latency_ticks": 0, 00:13:33.765 "unmap_latency_ticks": 0, 00:13:33.765 "max_unmap_latency_ticks": 0, 00:13:33.765 "min_unmap_latency_ticks": 0, 00:13:33.765 "copy_latency_ticks": 0, 00:13:33.765 "max_copy_latency_ticks": 0, 00:13:33.765 "min_copy_latency_ticks": 0 00:13:33.765 } 00:13:33.765 ] 00:13:33.765 }' 00:13:33.765 07:14:07 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:13:33.765 07:14:07 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=114944 00:13:33.765 07:14:07 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=114944 00:13:33.765 07:14:07 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:13:34.024 07:14:07 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=114432 00:13:34.024 07:14:07 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=229376 00:13:34.024 07:14:07 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:13:34.024 07:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.024 07:14:07 -- common/autotest_common.sh@10 -- # set +x 00:13:34.024 07:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.024 07:14:07 -- bdev/blockdev.sh@575 -- # iostats='{ 00:13:34.024 "tick_rate": 2200000000, 00:13:34.024 "ticks": 1620990619906, 00:13:34.024 "bdevs": [ 00:13:34.024 { 00:13:34.024 "name": "Malloc_STAT", 00:13:34.024 "bytes_read": 1002476032, 00:13:34.024 "num_read_ops": 244739, 00:13:34.024 "bytes_written": 0, 00:13:34.024 "num_write_ops": 0, 00:13:34.024 "bytes_unmapped": 0, 00:13:34.024 "num_unmap_ops": 0, 00:13:34.024 "bytes_copied": 0, 00:13:34.024 "num_copy_ops": 0, 00:13:34.024 "read_latency_ticks": 2376955826290, 00:13:34.024 "max_read_latency_ticks": 13384788, 00:13:34.024 "min_read_latency_ticks": 332644, 00:13:34.024 "write_latency_ticks": 0, 00:13:34.024 "max_write_latency_ticks": 0, 00:13:34.024 "min_write_latency_ticks": 0, 00:13:34.024 "unmap_latency_ticks": 0, 00:13:34.024 "max_unmap_latency_ticks": 0, 00:13:34.024 "min_unmap_latency_ticks": 0, 00:13:34.024 "copy_latency_ticks": 0, 00:13:34.025 "max_copy_latency_ticks": 0, 00:13:34.025 "min_copy_latency_ticks": 0, 00:13:34.025 "io_error": {} 00:13:34.025 } 00:13:34.025 ] 00:13:34.025 }' 00:13:34.025 07:14:07 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:13:34.025 07:14:07 -- bdev/blockdev.sh@576 -- # io_count2=244739 00:13:34.025 07:14:07 -- bdev/blockdev.sh@581 -- # '[' 229376 -lt 220675 ']' 
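[Note] The two bracket tests around this point encode a consistency check on the counters just collected: the per-channel read counts sampled mid-run must land between the first whole-device snapshot and the final one. Restated as the passing condition, with this run's values:

  io_count1=220675                                 # num_read_ops at the first snapshot
  io_count_per_channel_all=$((114944 + 114432))    # 229376, summed over both channels
  io_count2=244739                                 # num_read_ops at the final snapshot
  [ "$io_count_per_channel_all" -ge "$io_count1" ] &&
      [ "$io_count_per_channel_all" -le "$io_count2" ] &&
      echo "per-channel counters are consistent"

The script phrases the same thing negatively ('[' ... -lt / -gt ']'), failing the test if either bound is violated.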
00:13:34.025 07:14:07 -- bdev/blockdev.sh@581 -- # '[' 229376 -gt 244739 ']' 00:13:34.025 07:14:07 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:13:34.025 07:14:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:34.025 07:14:07 -- common/autotest_common.sh@10 -- # set +x 00:13:34.025 00:13:34.025 Latency(us) 00:13:34.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.025 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:13:34.025 Malloc_STAT : 2.19 58012.41 226.61 0.00 0.00 4403.22 1027.72 5898.24 00:13:34.025 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:13:34.025 Malloc_STAT : 2.19 57737.47 225.54 0.00 0.00 4424.24 796.86 6106.76 00:13:34.025 =================================================================================================================== 00:13:34.025 Total : 115749.88 452.15 0.00 0.00 4413.71 796.86 6106.76 00:13:34.025 0 00:13:34.025 07:14:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:34.025 07:14:07 -- bdev/blockdev.sh@607 -- # killprocess 115626 00:13:34.025 07:14:07 -- common/autotest_common.sh@924 -- # '[' -z 115626 ']' 00:13:34.025 07:14:07 -- common/autotest_common.sh@928 -- # kill -0 115626 00:13:34.025 07:14:07 -- common/autotest_common.sh@929 -- # uname 00:13:34.025 07:14:07 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:34.025 07:14:07 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 115626 00:13:34.025 07:14:07 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:34.025 killing process with pid 115626 00:13:34.025 07:14:07 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:34.025 07:14:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 115626' 00:13:34.025 07:14:07 -- common/autotest_common.sh@943 -- # kill 115626 00:13:34.025 Received shutdown signal, test time was about 2.330913 seconds 00:13:34.025 00:13:34.025 Latency(us) 00:13:34.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:34.025 =================================================================================================================== 00:13:34.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:34.025 07:14:07 -- common/autotest_common.sh@948 -- # wait 115626 00:13:35.402 07:14:08 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:13:35.402 00:13:35.402 real 0m4.903s 00:13:35.402 user 0m9.309s 00:13:35.402 sys 0m0.460s 00:13:35.402 07:14:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:35.402 07:14:08 -- common/autotest_common.sh@10 -- # set +x 00:13:35.402 ************************************ 00:13:35.402 END TEST bdev_stat 00:13:35.402 ************************************ 00:13:35.402 07:14:08 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:13:35.402 07:14:08 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:13:35.402 07:14:08 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:35.402 07:14:08 -- bdev/blockdev.sh@809 -- # cleanup 00:13:35.402 07:14:08 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:35.402 07:14:08 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:35.402 07:14:08 -- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:13:35.402 07:14:08 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:13:35.402 07:14:08 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:13:35.402 07:14:08 -- bdev/blockdev.sh@38 
-- # [[ bdev == xnvme ]] 00:13:35.402 00:13:35.402 real 2m23.021s 00:13:35.402 user 5m53.612s 00:13:35.402 sys 0m20.781s 00:13:35.402 07:14:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:35.402 ************************************ 00:13:35.402 END TEST blockdev_general 00:13:35.402 ************************************ 00:13:35.402 07:14:09 -- common/autotest_common.sh@10 -- # set +x 00:13:35.402 07:14:09 -- spdk/autotest.sh@196 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:35.402 07:14:09 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:35.402 07:14:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:35.402 07:14:09 -- common/autotest_common.sh@10 -- # set +x 00:13:35.402 ************************************ 00:13:35.402 START TEST bdev_raid 00:13:35.402 ************************************ 00:13:35.402 07:14:09 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:13:35.660 * Looking for test storage... 00:13:35.660 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:35.660 07:14:09 -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:35.661 07:14:09 -- bdev/nbd_common.sh@6 -- # set -e 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@14 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@716 -- # uname -s 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:13:35.661 07:14:09 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:35.661 07:14:09 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:35.661 07:14:09 -- common/autotest_common.sh@10 -- # set +x 00:13:35.661 ************************************ 00:13:35.661 START TEST raid_function_test_raid0 00:13:35.661 ************************************ 00:13:35.661 07:14:09 -- common/autotest_common.sh@1102 -- # raid_function_test raid0 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@86 -- # raid_pid=115797 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 115797' 00:13:35.661 Process raid pid: 115797 00:13:35.661 07:14:09 -- bdev/bdev_raid.sh@88 -- # waitforlisten 115797 /var/tmp/spdk-raid.sock 00:13:35.661 07:14:09 -- common/autotest_common.sh@817 -- # '[' -z 115797 ']' 00:13:35.661 07:14:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:35.661 07:14:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:35.661 07:14:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
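[Note] Each raid function test talks to a bare bdev_svc app on its own UNIX socket rather than to bdevperf, so every rpc.py call carries -s /var/tmp/spdk-raid.sock. The startup being waited on here, in sketch form (backgrounding and cleanup elided; the rpc_py shorthand is illustrative):

  # -r: RPC listen address; -i 0: shared-memory id;
  # -L bdev_raid: enables the *DEBUG* bdev_raid.c lines seen below
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc_py bdev_raid_get_bdevs online        # e.g. list raid bdevs once it is up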
00:13:35.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:35.661 07:14:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:35.661 07:14:09 -- common/autotest_common.sh@10 -- # set +x 00:13:35.661 [2024-02-13 07:14:09.244351] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:13:35.661 [2024-02-13 07:14:09.244547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.919 [2024-02-13 07:14:09.412093] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.919 [2024-02-13 07:14:09.608879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.178 [2024-02-13 07:14:09.803891] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:36.746 07:14:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:36.746 07:14:10 -- common/autotest_common.sh@850 -- # return 0 00:13:36.746 07:14:10 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:13:36.746 07:14:10 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:13:36.746 07:14:10 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:36.746 07:14:10 -- bdev/bdev_raid.sh@70 -- # cat 00:13:36.746 07:14:10 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:37.005 [2024-02-13 07:14:10.457333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:37.006 [2024-02-13 07:14:10.459534] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:37.006 [2024-02-13 07:14:10.459612] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:37.006 [2024-02-13 07:14:10.459625] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:37.006 [2024-02-13 07:14:10.459803] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:37.006 [2024-02-13 07:14:10.460206] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:37.006 [2024-02-13 07:14:10.460227] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:13:37.006 [2024-02-13 07:14:10.460412] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:37.006 Base_1 00:13:37.006 Base_2 00:13:37.006 07:14:10 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:37.006 07:14:10 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:37.006 07:14:10 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:37.264 07:14:10 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:37.264 07:14:10 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:37.264 07:14:10 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:37.264 07:14:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:37.264 07:14:10 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:13:37.264 07:14:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:37.264 07:14:10 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:13:37.264 07:14:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:37.264 07:14:10 
-- bdev/nbd_common.sh@12 -- # local i 00:13:37.265 07:14:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:37.265 07:14:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.265 07:14:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:37.524 [2024-02-13 07:14:10.997378] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:37.524 /dev/nbd0 00:13:37.524 07:14:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:37.524 07:14:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:37.524 07:14:11 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:37.524 07:14:11 -- common/autotest_common.sh@855 -- # local i 00:13:37.524 07:14:11 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:37.524 07:14:11 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:37.524 07:14:11 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:37.524 07:14:11 -- common/autotest_common.sh@859 -- # break 00:13:37.524 07:14:11 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:37.524 07:14:11 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:37.524 07:14:11 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:37.524 1+0 records in 00:13:37.524 1+0 records out 00:13:37.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423089 s, 9.7 MB/s 00:13:37.524 07:14:11 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.524 07:14:11 -- common/autotest_common.sh@872 -- # size=4096 00:13:37.524 07:14:11 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:37.524 07:14:11 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:37.524 07:14:11 -- common/autotest_common.sh@875 -- # return 0 00:13:37.524 07:14:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:37.524 07:14:11 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:37.524 07:14:11 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:37.524 07:14:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:37.524 07:14:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:37.783 07:14:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:37.783 { 00:13:37.783 "nbd_device": "/dev/nbd0", 00:13:37.783 "bdev_name": "raid" 00:13:37.783 } 00:13:37.783 ]' 00:13:37.783 07:14:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:37.783 { 00:13:37.783 "nbd_device": "/dev/nbd0", 00:13:37.783 "bdev_name": "raid" 00:13:37.783 } 00:13:37.783 ]' 00:13:37.783 07:14:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:37.783 07:14:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:37.783 07:14:11 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:37.783 07:14:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:37.783 07:14:11 -- bdev/nbd_common.sh@65 -- # count=1 00:13:37.783 07:14:11 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:37.783 
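[Note] nbd_start_disks maps the freshly built raid bdev onto a kernel /dev/nbdX node so ordinary block tools can exercise it; the readiness probe is the 4 KiB direct read traced just below. Reduced to its essentials (assuming the nbd module is already loaded, as checked earlier, and the rpc_py shorthand from above):

  $rpc_py nbd_start_disk raid /dev/nbd0     # bind bdev "raid" to /dev/nbd0
  # waitfornbd: the device is usable once a direct read succeeds
  dd if=/dev/nbd0 of=/dev/null bs=4096 count=1 iflag=direct
  # detached again at the end of the test:
  $rpc_py nbd_stop_disk /dev/nbd0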
07:14:11 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:37.783 4096+0 records in 00:13:37.783 4096+0 records out 00:13:37.783 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0269338 s, 77.9 MB/s 00:13:37.783 07:14:11 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:38.043 4096+0 records in 00:13:38.043 4096+0 records out 00:13:38.043 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.257876 s, 8.1 MB/s 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:38.043 128+0 records in 00:13:38.043 128+0 records out 00:13:38.043 65536 bytes (66 kB, 64 KiB) copied, 0.00065986 s, 99.3 MB/s 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:38.043 2035+0 records in 00:13:38.043 2035+0 records out 00:13:38.043 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00716785 s, 145 MB/s 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@42 -- # dd 
if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:38.043 456+0 records in 00:13:38.043 456+0 records out 00:13:38.043 233472 bytes (233 kB, 228 KiB) copied, 0.00205468 s, 114 MB/s 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:38.043 07:14:11 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:38.043 07:14:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:38.043 07:14:11 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:38.043 07:14:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.043 07:14:11 -- bdev/nbd_common.sh@51 -- # local i 00:13:38.043 07:14:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.043 07:14:11 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.310 [2024-02-13 07:14:11.939227] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@41 -- # break 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.310 07:14:11 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:38.310 07:14:11 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:38.581 07:14:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:38.581 07:14:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:38.581 07:14:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:38.840 07:14:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:38.840 07:14:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:38.840 07:14:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:38.840 07:14:12 -- bdev/nbd_common.sh@65 -- # true 00:13:38.840 07:14:12 -- bdev/nbd_common.sh@65 -- # count=0 00:13:38.840 07:14:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:38.840 07:14:12 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:38.840 07:14:12 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:38.840 07:14:12 -- bdev/bdev_raid.sh@111 -- # killprocess 115797 00:13:38.840 07:14:12 -- common/autotest_common.sh@924 -- # '[' -z 115797 ']' 00:13:38.840 07:14:12 -- common/autotest_common.sh@928 -- # kill -0 115797 00:13:38.840 07:14:12 -- common/autotest_common.sh@929 -- # uname 00:13:38.840 07:14:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:38.840 07:14:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 115797 00:13:38.840 07:14:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:38.840 killing process with pid 115797 
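[Note] raid_unmap_data_verify keeps a reference file (/raidrandtest) and the NBD device in lockstep: both get the same 2 MiB of random data, then for each (offset, length) pair the file range is zeroed while the same byte range is discarded on the device, and cmp confirms they still match, i.e. that unmap reads back as zeroes across the raid stripe. One iteration spelled out, using the second pair from this run (blksize=512):

  off_blk=1028; num_blk=2035
  unmap_off=$((off_blk * 512))    # 526336 bytes
  unmap_len=$((num_blk * 512))    # 1041920 bytes
  dd if=/dev/zero of=/raidrandtest bs=512 seek=$off_blk count=$num_blk conv=notrunc
  blkdiscard -o $unmap_off -l $unmap_len /dev/nbd0
  blockdev --flushbufs /dev/nbd0             # drop cached pages for nbd0
  cmp -b -n 2097152 /raidrandtest /dev/nbd0  # whole 2 MiB region must still match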
00:13:38.840 07:14:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:38.840 07:14:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 115797' 00:13:38.840 07:14:12 -- common/autotest_common.sh@943 -- # kill 115797 00:13:38.840 07:14:12 -- common/autotest_common.sh@948 -- # wait 115797 00:13:38.840 [2024-02-13 07:14:12.308504] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.840 [2024-02-13 07:14:12.308603] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.840 [2024-02-13 07:14:12.308668] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.840 [2024-02-13 07:14:12.308681] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:13:38.840 [2024-02-13 07:14:12.459389] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.219 07:14:13 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:40.219 00:13:40.219 real 0m4.334s 00:13:40.219 user 0m5.517s 00:13:40.219 sys 0m0.979s 00:13:40.219 07:14:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:40.219 07:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:40.219 ************************************ 00:13:40.219 END TEST raid_function_test_raid0 00:13:40.219 ************************************ 00:13:40.219 07:14:13 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:13:40.219 07:14:13 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:13:40.219 07:14:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:40.219 07:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:40.219 ************************************ 00:13:40.219 START TEST raid_function_test_concat 00:13:40.219 ************************************ 00:13:40.219 07:14:13 -- common/autotest_common.sh@1102 -- # raid_function_test concat 00:13:40.219 07:14:13 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:13:40.219 07:14:13 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:13:40.219 07:14:13 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:13:40.219 07:14:13 -- bdev/bdev_raid.sh@86 -- # raid_pid=115959 00:13:40.219 Process raid pid: 115959 00:13:40.219 07:14:13 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 115959' 00:13:40.219 07:14:13 -- bdev/bdev_raid.sh@88 -- # waitforlisten 115959 /var/tmp/spdk-raid.sock 00:13:40.219 07:14:13 -- bdev/bdev_raid.sh@85 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:40.219 07:14:13 -- common/autotest_common.sh@817 -- # '[' -z 115959 ']' 00:13:40.219 07:14:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:40.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:40.219 07:14:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:40.219 07:14:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:40.219 07:14:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:40.219 07:14:13 -- common/autotest_common.sh@10 -- # set +x 00:13:40.219 [2024-02-13 07:14:13.631726] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:13:40.219 [2024-02-13 07:14:13.631924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.219 [2024-02-13 07:14:13.799212] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.478 [2024-02-13 07:14:14.005189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.737 [2024-02-13 07:14:14.196860] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.996 07:14:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:40.996 07:14:14 -- common/autotest_common.sh@850 -- # return 0 00:13:40.996 07:14:14 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:13:40.996 07:14:14 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:13:40.996 07:14:14 -- bdev/bdev_raid.sh@68 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:40.996 07:14:14 -- bdev/bdev_raid.sh@70 -- # cat 00:13:40.996 07:14:14 -- bdev/bdev_raid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:13:41.255 [2024-02-13 07:14:14.810015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:41.255 [2024-02-13 07:14:14.812208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:41.255 [2024-02-13 07:14:14.812307] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:41.255 [2024-02-13 07:14:14.812320] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:41.255 [2024-02-13 07:14:14.812518] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:41.255 [2024-02-13 07:14:14.812944] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:41.255 [2024-02-13 07:14:14.812968] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x616000007280 00:13:41.255 [2024-02-13 07:14:14.813189] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:41.255 Base_1 00:13:41.255 Base_2 00:13:41.255 07:14:14 -- bdev/bdev_raid.sh@77 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/bdev/rpcs.txt 00:13:41.255 07:14:14 -- bdev/bdev_raid.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:13:41.255 07:14:14 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:13:41.515 07:14:15 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:13:41.515 07:14:15 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:13:41.515 07:14:15 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:13:41.515 07:14:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:41.515 07:14:15 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:13:41.515 07:14:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:41.515 07:14:15 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:13:41.515 07:14:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:41.515 07:14:15 -- bdev/nbd_common.sh@12 -- # local i 00:13:41.515 07:14:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:41.515 07:14:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.515 07:14:15 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:13:41.773 [2024-02-13 07:14:15.318086] 
bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:13:41.773 /dev/nbd0 00:13:41.773 07:14:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:41.773 07:14:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:41.773 07:14:15 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:13:41.773 07:14:15 -- common/autotest_common.sh@855 -- # local i 00:13:41.773 07:14:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:13:41.773 07:14:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:13:41.773 07:14:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:13:41.773 07:14:15 -- common/autotest_common.sh@859 -- # break 00:13:41.773 07:14:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:13:41.773 07:14:15 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:13:41.773 07:14:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:41.773 1+0 records in 00:13:41.773 1+0 records out 00:13:41.773 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350893 s, 11.7 MB/s 00:13:41.774 07:14:15 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.774 07:14:15 -- common/autotest_common.sh@872 -- # size=4096 00:13:41.774 07:14:15 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:41.774 07:14:15 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:13:41.774 07:14:15 -- common/autotest_common.sh@875 -- # return 0 00:13:41.774 07:14:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:41.774 07:14:15 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:41.774 07:14:15 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:41.774 07:14:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:41.774 07:14:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:42.033 07:14:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:42.033 { 00:13:42.033 "nbd_device": "/dev/nbd0", 00:13:42.033 "bdev_name": "raid" 00:13:42.033 } 00:13:42.033 ]' 00:13:42.033 07:14:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:42.033 { 00:13:42.033 "nbd_device": "/dev/nbd0", 00:13:42.033 "bdev_name": "raid" 00:13:42.033 } 00:13:42.033 ]' 00:13:42.033 07:14:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:42.033 07:14:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:13:42.033 07:14:15 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:13:42.033 07:14:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:42.033 07:14:15 -- bdev/nbd_common.sh@65 -- # count=1 00:13:42.033 07:14:15 -- bdev/nbd_common.sh@66 -- # echo 1 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@98 -- # count=1 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@20 -- # local blksize 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 
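[Note] The lsblk pipeline traced here is how the helper learns the device's logical sector size: LOG-SEC prints a right-aligned column, grep drops the header, and cut takes the fifth space-separated field (the column padding accounts for the field index on this system). As a one-liner:

  blksize=$(lsblk -o LOG-SEC /dev/nbd0 | grep -v LOG-SEC | cut -d ' ' -f 5)
  echo "$blksize"    # 512 for these malloc-backed raid bdevs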
00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=(0 1028 321) 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=(128 2035 456) 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:13:42.033 4096+0 records in 00:13:42.033 4096+0 records out 00:13:42.033 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0205049 s, 102 MB/s 00:13:42.033 07:14:15 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:13:42.292 4096+0 records in 00:13:42.292 4096+0 records out 00:13:42.292 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.272257 s, 7.7 MB/s 00:13:42.292 07:14:15 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:13:42.292 07:14:15 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:42.292 07:14:15 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:13:42.292 07:14:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:42.292 07:14:15 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:13:42.292 07:14:15 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:13:42.292 07:14:15 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:13:42.292 128+0 records in 00:13:42.292 128+0 records out 00:13:42.292 65536 bytes (66 kB, 64 KiB) copied, 0.0007241 s, 90.5 MB/s 00:13:42.292 07:14:15 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:13:42.292 07:14:15 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:42.551 07:14:15 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:42.551 07:14:15 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:42.551 07:14:15 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:42.551 07:14:15 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:13:42.551 07:14:15 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:13:42.551 07:14:15 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:13:42.551 2035+0 records in 00:13:42.551 2035+0 records out 00:13:42.551 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00597282 s, 174 MB/s 00:13:42.551 07:14:15 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:13:42.551 456+0 records in 00:13:42.551 456+0 records out 00:13:42.551 233472 bytes (233 kB, 228 KiB) copied, 0.00140633 s, 166 MB/s 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@46 -- # blockdev 
--flushbufs /dev/nbd0 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@53 -- # return 0 00:13:42.551 07:14:16 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:42.551 07:14:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:42.551 07:14:16 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:13:42.551 07:14:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:42.551 07:14:16 -- bdev/nbd_common.sh@51 -- # local i 00:13:42.551 07:14:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:42.551 07:14:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:42.810 07:14:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.811 07:14:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.811 07:14:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.811 07:14:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.811 07:14:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.811 07:14:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.811 [2024-02-13 07:14:16.270577] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.811 07:14:16 -- bdev/nbd_common.sh@41 -- # break 00:13:42.811 07:14:16 -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.811 07:14:16 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:13:42.811 07:14:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:42.811 07:14:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:13:43.069 07:14:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:43.069 07:14:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:43.069 07:14:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:43.069 07:14:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:43.069 07:14:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:13:43.069 07:14:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:43.070 07:14:16 -- bdev/nbd_common.sh@65 -- # true 00:13:43.070 07:14:16 -- bdev/nbd_common.sh@65 -- # count=0 00:13:43.070 07:14:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:13:43.070 07:14:16 -- bdev/bdev_raid.sh@106 -- # count=0 00:13:43.070 07:14:16 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:13:43.070 07:14:16 -- bdev/bdev_raid.sh@111 -- # killprocess 115959 00:13:43.070 07:14:16 -- common/autotest_common.sh@924 -- # '[' -z 115959 ']' 00:13:43.070 07:14:16 -- common/autotest_common.sh@928 -- # kill -0 115959 00:13:43.070 07:14:16 -- common/autotest_common.sh@929 -- # uname 00:13:43.070 07:14:16 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:43.070 07:14:16 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 115959 00:13:43.070 07:14:16 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:43.070 killing process with pid 115959 00:13:43.070 07:14:16 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:43.070 07:14:16 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 115959' 00:13:43.070 07:14:16 -- common/autotest_common.sh@943 -- # kill 115959 00:13:43.070 07:14:16 -- common/autotest_common.sh@948 -- # wait 115959 00:13:43.070 [2024-02-13 
07:14:16.621816] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.070 [2024-02-13 07:14:16.621916] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.070 [2024-02-13 07:14:16.621972] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.070 [2024-02-13 07:14:16.622000] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name raid, state offline 00:13:43.328 [2024-02-13 07:14:16.776179] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@113 -- # return 0 00:13:44.263 00:13:44.263 real 0m4.266s 00:13:44.263 user 0m5.418s 00:13:44.263 sys 0m0.949s 00:13:44.263 07:14:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.263 ************************************ 00:13:44.263 END TEST raid_function_test_concat 00:13:44.263 ************************************ 00:13:44.263 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:13:44.263 07:14:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:13:44.263 07:14:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:44.263 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:44.263 ************************************ 00:13:44.263 START TEST raid0_resize_test 00:13:44.263 ************************************ 00:13:44.263 07:14:17 -- common/autotest_common.sh@1102 -- # raid0_resize_test 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@301 -- # raid_pid=116129 00:13:44.263 Process raid pid: 116129 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 116129' 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@303 -- # waitforlisten 116129 /var/tmp/spdk-raid.sock 00:13:44.263 07:14:17 -- common/autotest_common.sh@817 -- # '[' -z 116129 ']' 00:13:44.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:44.263 07:14:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:44.263 07:14:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:44.263 07:14:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:44.263 07:14:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:44.263 07:14:17 -- common/autotest_common.sh@10 -- # set +x 00:13:44.263 07:14:17 -- bdev/bdev_raid.sh@300 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:44.522 [2024-02-13 07:14:17.955937] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
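For reference, the discard-consistency loop that raid_function_test_concat just completed boils down to the following minimal sketch. Paths, block size, and the three offset/length pairs are taken verbatim from the trace above; the sketch assumes the raid bdev is already exported as /dev/nbd0 and that /raidrandtest is writable:

  #!/usr/bin/env bash
  set -e
  blksize=512
  rw_blk_num=4096
  rw_len=$((blksize * rw_blk_num))        # 2097152 bytes, as in the trace
  unmap_blk_offs=(0 1028 321)
  unmap_blk_nums=(128 2035 456)

  # Seed the device and a local reference file with identical random data.
  dd if=/dev/urandom of=/raidrandtest bs=$blksize count=$rw_blk_num
  dd if=/raidrandtest of=/dev/nbd0 bs=$blksize count=$rw_blk_num oflag=direct
  blockdev --flushbufs /dev/nbd0
  cmp -b -n $rw_len /raidrandtest /dev/nbd0

  for ((i = 0; i < ${#unmap_blk_offs[@]}; i++)); do
    unmap_off=$((blksize * unmap_blk_offs[i]))
    unmap_len=$((blksize * unmap_blk_nums[i]))
    # Zero the same region in the reference file that gets discarded on the
    # device; the test asserts that unmapped blocks read back as zeroes.
    dd if=/dev/zero of=/raidrandtest bs=$blksize seek=${unmap_blk_offs[i]} \
       count=${unmap_blk_nums[i]} conv=notrunc
    blkdiscard -o $unmap_off -l $unmap_len /dev/nbd0
    blockdev --flushbufs /dev/nbd0
    cmp -b -n $rw_len /raidrandtest /dev/nbd0
  done

With blksize 512, the pairs above map to exactly the byte offsets and lengths seen in the trace: 1028 blocks -> offset 526336, 2035 blocks -> length 1041920, and so on.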
00:13:44.522 [2024-02-13 07:14:17.956353] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:44.522 [2024-02-13 07:14:18.125854] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.780 [2024-02-13 07:14:18.308018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.039 [2024-02-13 07:14:18.501244] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.297 07:14:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:45.297 07:14:18 -- common/autotest_common.sh@850 -- # return 0 00:13:45.297 07:14:18 -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:13:45.556 Base_1 00:13:45.556 07:14:19 -- bdev/bdev_raid.sh@306 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:13:45.556 Base_2 00:13:45.556 07:14:19 -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:13:45.814 [2024-02-13 07:14:19.423195] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:13:45.814 [2024-02-13 07:14:19.425393] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:13:45.814 [2024-02-13 07:14:19.425469] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:45.814 [2024-02-13 07:14:19.425494] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:45.814 [2024-02-13 07:14:19.425633] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005380 00:13:45.814 [2024-02-13 07:14:19.425938] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:45.814 [2024-02-13 07:14:19.425959] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x616000007280 00:13:45.814 [2024-02-13 07:14:19.426113] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.814 07:14:19 -- bdev/bdev_raid.sh@311 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:13:46.073 [2024-02-13 07:14:19.639212] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:13:46.073 [2024-02-13 07:14:19.639237] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:13:46.073 true 00:13:46.073 07:14:19 -- bdev/bdev_raid.sh@314 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:46.073 07:14:19 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:13:46.331 [2024-02-13 07:14:19.895398] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.331 07:14:19 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:13:46.331 07:14:19 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:13:46.331 07:14:19 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:13:46.331 07:14:19 -- bdev/bdev_raid.sh@322 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:13:46.590 [2024-02-13 07:14:20.151291] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 
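At this point Base_1 has been doubled and the raid, as asserted above, still reports 131072 blocks; the Base_2 resize is in flight. Condensed into one sketch (rpc socket, sizes, and jq filter as in the trace), the whole resize check is the following. raid0 capacity tracks the smallest base bdev, so the raid only grows once both bases have grown:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  $rpc -s $sock bdev_null_create Base_1 32 512
  $rpc -s $sock bdev_null_create Base_2 32 512
  $rpc -s $sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid

  $rpc -s $sock bdev_null_resize Base_1 64
  blkcnt=$($rpc -s $sock bdev_get_bdevs -b Raid | jq '.[].num_blocks')
  [ "$blkcnt" -eq 131072 ]   # still 64 MiB: Base_2 is the limiting base

  $rpc -s $sock bdev_null_resize Base_2 64
  blkcnt=$($rpc -s $sock bdev_get_bdevs -b Raid | jq '.[].num_blocks')
  [ "$blkcnt" -eq 262144 ]   # now 128 MiB: both bases are 64 MiB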
00:13:46.590 [2024-02-13 07:14:20.151316] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:13:46.590 [2024-02-13 07:14:20.151367] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:13:46.590 [2024-02-13 07:14:20.151425] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:46.590 true 00:13:46.590 07:14:20 -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:13:46.590 07:14:20 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:13:46.848 [2024-02-13 07:14:20.351470] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.848 07:14:20 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:13:46.849 07:14:20 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:13:46.849 07:14:20 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:13:46.849 07:14:20 -- bdev/bdev_raid.sh@332 -- # killprocess 116129 00:13:46.849 07:14:20 -- common/autotest_common.sh@924 -- # '[' -z 116129 ']' 00:13:46.849 07:14:20 -- common/autotest_common.sh@928 -- # kill -0 116129 00:13:46.849 07:14:20 -- common/autotest_common.sh@929 -- # uname 00:13:46.849 07:14:20 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:46.849 07:14:20 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 116129 00:13:46.849 07:14:20 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:46.849 killing process with pid 116129 00:13:46.849 07:14:20 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:46.849 07:14:20 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 116129' 00:13:46.849 07:14:20 -- common/autotest_common.sh@943 -- # kill 116129 00:13:46.849 07:14:20 -- common/autotest_common.sh@948 -- # wait 116129 00:13:46.849 [2024-02-13 07:14:20.384025] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:46.849 [2024-02-13 07:14:20.384092] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.849 [2024-02-13 07:14:20.384131] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.849 [2024-02-13 07:14:20.384147] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Raid, state offline 00:13:46.849 [2024-02-13 07:14:20.384662] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:47.786 07:14:21 -- bdev/bdev_raid.sh@334 -- # return 0 00:13:47.786 00:13:47.786 real 0m3.556s 00:13:47.786 user 0m4.989s 00:13:47.786 sys 0m0.523s 00:13:47.786 ************************************ 00:13:47.786 END TEST raid0_resize_test 00:13:47.786 ************************************ 00:13:47.786 07:14:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:47.786 07:14:21 -- common/autotest_common.sh@10 -- # set +x 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:13:48.045 07:14:21 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:13:48.045 07:14:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:48.045 07:14:21 -- common/autotest_common.sh@10 -- # set +x 00:13:48.045 ************************************ 00:13:48.045 START TEST 
raid_state_function_test 00:13:48.045 ************************************ 00:13:48.045 07:14:21 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 2 false 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@226 -- # raid_pid=116211 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116211' 00:13:48.045 Process raid pid: 116211 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116211 /var/tmp/spdk-raid.sock 00:13:48.045 07:14:21 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:48.045 07:14:21 -- common/autotest_common.sh@817 -- # '[' -z 116211 ']' 00:13:48.045 07:14:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:48.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:48.045 07:14:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:48.045 07:14:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:48.045 07:14:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:48.045 07:14:21 -- common/autotest_common.sh@10 -- # set +x 00:13:48.045 [2024-02-13 07:14:21.572993] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
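Condensed, the state walk this test is about to drive looks like the sketch below (rpc path and arguments as in the trace; the actual run interleaves bdev_raid_delete and re-create steps that are elided here). A raid created on top of bdevs that do not exist yet stays in 'configuring'; it goes 'online' once every base bdev exists and is claimed; and, since raid0 has no redundancy, it drops to 'offline' as soon as any base disappears:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  state() {
    $rpc bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "Existed_Raid") | .state'
  }

  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  state                               # configuring (0 of 2 bases discovered)

  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  state                               # configuring (1 of 2 bases discovered)

  $rpc bdev_malloc_create 32 512 -b BaseBdev2
  state                               # online (2 of 2 bases discovered)

  $rpc bdev_malloc_delete BaseBdev1
  state                               # offline: raid0 cannot run degraded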
00:13:48.045 [2024-02-13 07:14:21.573202] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.045 [2024-02-13 07:14:21.731268] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.305 [2024-02-13 07:14:21.938969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.563 [2024-02-13 07:14:22.137883] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.130 07:14:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:49.130 07:14:22 -- common/autotest_common.sh@850 -- # return 0 00:13:49.130 07:14:22 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:49.130 [2024-02-13 07:14:22.789130] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:49.130 [2024-02-13 07:14:22.789235] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:49.130 [2024-02-13 07:14:22.789250] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:49.130 [2024-02-13 07:14:22.789270] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.131 07:14:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.389 07:14:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:49.389 "name": "Existed_Raid", 00:13:49.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.389 "strip_size_kb": 64, 00:13:49.389 "state": "configuring", 00:13:49.389 "raid_level": "raid0", 00:13:49.389 "superblock": false, 00:13:49.389 "num_base_bdevs": 2, 00:13:49.389 "num_base_bdevs_discovered": 0, 00:13:49.389 "num_base_bdevs_operational": 2, 00:13:49.389 "base_bdevs_list": [ 00:13:49.389 { 00:13:49.389 "name": "BaseBdev1", 00:13:49.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.389 "is_configured": false, 00:13:49.389 "data_offset": 0, 00:13:49.389 "data_size": 0 00:13:49.389 }, 00:13:49.389 { 00:13:49.389 "name": "BaseBdev2", 00:13:49.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.389 "is_configured": false, 00:13:49.389 "data_offset": 0, 00:13:49.389 "data_size": 0 00:13:49.389 } 00:13:49.389 ] 00:13:49.389 }' 00:13:49.389 07:14:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:49.389 07:14:23 -- 
common/autotest_common.sh@10 -- # set +x 00:13:50.325 07:14:23 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:50.325 [2024-02-13 07:14:24.009226] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:50.325 [2024-02-13 07:14:24.009265] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:13:50.584 07:14:24 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:50.584 [2024-02-13 07:14:24.217313] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:50.584 [2024-02-13 07:14:24.217434] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:50.584 [2024-02-13 07:14:24.217464] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:50.584 [2024-02-13 07:14:24.217489] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:50.584 07:14:24 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:50.843 [2024-02-13 07:14:24.475002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.843 BaseBdev1 00:13:50.843 07:14:24 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:50.843 07:14:24 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:13:50.843 07:14:24 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:50.843 07:14:24 -- common/autotest_common.sh@887 -- # local i 00:13:50.843 07:14:24 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:50.843 07:14:24 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:50.843 07:14:24 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:51.103 07:14:24 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:51.362 [ 00:13:51.362 { 00:13:51.362 "name": "BaseBdev1", 00:13:51.362 "aliases": [ 00:13:51.362 "a738267d-cfd0-46c8-bcc3-5317fc4c1b95" 00:13:51.362 ], 00:13:51.362 "product_name": "Malloc disk", 00:13:51.362 "block_size": 512, 00:13:51.362 "num_blocks": 65536, 00:13:51.362 "uuid": "a738267d-cfd0-46c8-bcc3-5317fc4c1b95", 00:13:51.362 "assigned_rate_limits": { 00:13:51.362 "rw_ios_per_sec": 0, 00:13:51.362 "rw_mbytes_per_sec": 0, 00:13:51.362 "r_mbytes_per_sec": 0, 00:13:51.362 "w_mbytes_per_sec": 0 00:13:51.362 }, 00:13:51.362 "claimed": true, 00:13:51.362 "claim_type": "exclusive_write", 00:13:51.362 "zoned": false, 00:13:51.362 "supported_io_types": { 00:13:51.362 "read": true, 00:13:51.362 "write": true, 00:13:51.362 "unmap": true, 00:13:51.362 "write_zeroes": true, 00:13:51.362 "flush": true, 00:13:51.362 "reset": true, 00:13:51.362 "compare": false, 00:13:51.362 "compare_and_write": false, 00:13:51.362 "abort": true, 00:13:51.362 "nvme_admin": false, 00:13:51.362 "nvme_io": false 00:13:51.362 }, 00:13:51.362 "memory_domains": [ 00:13:51.362 { 00:13:51.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.362 "dma_device_type": 2 00:13:51.362 } 00:13:51.362 ], 00:13:51.362 "driver_specific": {} 00:13:51.362 } 00:13:51.362 ] 00:13:51.362 07:14:24 
-- common/autotest_common.sh@893 -- # return 0 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:51.362 07:14:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.621 07:14:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:51.621 "name": "Existed_Raid", 00:13:51.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.621 "strip_size_kb": 64, 00:13:51.621 "state": "configuring", 00:13:51.621 "raid_level": "raid0", 00:13:51.621 "superblock": false, 00:13:51.621 "num_base_bdevs": 2, 00:13:51.621 "num_base_bdevs_discovered": 1, 00:13:51.621 "num_base_bdevs_operational": 2, 00:13:51.621 "base_bdevs_list": [ 00:13:51.621 { 00:13:51.621 "name": "BaseBdev1", 00:13:51.621 "uuid": "a738267d-cfd0-46c8-bcc3-5317fc4c1b95", 00:13:51.621 "is_configured": true, 00:13:51.621 "data_offset": 0, 00:13:51.621 "data_size": 65536 00:13:51.621 }, 00:13:51.621 { 00:13:51.621 "name": "BaseBdev2", 00:13:51.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.621 "is_configured": false, 00:13:51.621 "data_offset": 0, 00:13:51.621 "data_size": 0 00:13:51.621 } 00:13:51.621 ] 00:13:51.621 }' 00:13:51.621 07:14:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:51.621 07:14:25 -- common/autotest_common.sh@10 -- # set +x 00:13:52.210 07:14:25 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:52.469 [2024-02-13 07:14:26.111381] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:52.469 [2024-02-13 07:14:26.111450] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:13:52.469 07:14:26 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:13:52.469 07:14:26 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:52.728 [2024-02-13 07:14:26.323485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.728 [2024-02-13 07:14:26.325674] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:52.728 [2024-02-13 07:14:26.325765] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:52.728 07:14:26 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:52.728 07:14:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.987 07:14:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:52.987 "name": "Existed_Raid", 00:13:52.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.987 "strip_size_kb": 64, 00:13:52.987 "state": "configuring", 00:13:52.987 "raid_level": "raid0", 00:13:52.987 "superblock": false, 00:13:52.987 "num_base_bdevs": 2, 00:13:52.987 "num_base_bdevs_discovered": 1, 00:13:52.987 "num_base_bdevs_operational": 2, 00:13:52.987 "base_bdevs_list": [ 00:13:52.987 { 00:13:52.987 "name": "BaseBdev1", 00:13:52.987 "uuid": "a738267d-cfd0-46c8-bcc3-5317fc4c1b95", 00:13:52.987 "is_configured": true, 00:13:52.987 "data_offset": 0, 00:13:52.987 "data_size": 65536 00:13:52.987 }, 00:13:52.987 { 00:13:52.987 "name": "BaseBdev2", 00:13:52.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.987 "is_configured": false, 00:13:52.987 "data_offset": 0, 00:13:52.987 "data_size": 0 00:13:52.987 } 00:13:52.987 ] 00:13:52.987 }' 00:13:52.987 07:14:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:52.987 07:14:26 -- common/autotest_common.sh@10 -- # set +x 00:13:53.924 07:14:27 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:53.924 [2024-02-13 07:14:27.588941] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:53.924 [2024-02-13 07:14:27.589009] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:13:53.924 [2024-02-13 07:14:27.589030] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:53.924 [2024-02-13 07:14:27.589217] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:13:53.924 [2024-02-13 07:14:27.589603] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:13:53.924 [2024-02-13 07:14:27.589625] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:13:53.924 [2024-02-13 07:14:27.589955] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.924 BaseBdev2 00:13:53.924 07:14:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:53.924 07:14:27 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:13:53.924 07:14:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:13:53.924 07:14:27 -- common/autotest_common.sh@887 -- # local i 00:13:53.924 07:14:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:13:53.924 07:14:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:13:53.924 
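The waitforbdev helper entered above has, in rough shape, the following body (mirroring the bdev_wait_for_examine and bdev_get_bdevs calls visible in the trace; the 2000 ms timeout is the bdev_timeout default just set). The -t flag makes the rpc itself block until the bdev appears or the timeout expires:

  waitforbdev() {
    local bdev_name=$1
    local rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Let any pending examine callbacks finish, then poll for the bdev.
    $rpc bdev_wait_for_examine
    $rpc bdev_get_bdevs -b "$bdev_name" -t 2000 > /dev/null
  }

  waitforbdev BaseBdev2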
07:14:27 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:54.182 07:14:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:54.441 [ 00:13:54.441 { 00:13:54.441 "name": "BaseBdev2", 00:13:54.441 "aliases": [ 00:13:54.441 "b03599a1-8815-49e6-9a96-658fb6fdc418" 00:13:54.441 ], 00:13:54.441 "product_name": "Malloc disk", 00:13:54.441 "block_size": 512, 00:13:54.441 "num_blocks": 65536, 00:13:54.441 "uuid": "b03599a1-8815-49e6-9a96-658fb6fdc418", 00:13:54.441 "assigned_rate_limits": { 00:13:54.441 "rw_ios_per_sec": 0, 00:13:54.441 "rw_mbytes_per_sec": 0, 00:13:54.441 "r_mbytes_per_sec": 0, 00:13:54.441 "w_mbytes_per_sec": 0 00:13:54.441 }, 00:13:54.441 "claimed": true, 00:13:54.441 "claim_type": "exclusive_write", 00:13:54.441 "zoned": false, 00:13:54.441 "supported_io_types": { 00:13:54.441 "read": true, 00:13:54.441 "write": true, 00:13:54.441 "unmap": true, 00:13:54.441 "write_zeroes": true, 00:13:54.441 "flush": true, 00:13:54.441 "reset": true, 00:13:54.441 "compare": false, 00:13:54.441 "compare_and_write": false, 00:13:54.441 "abort": true, 00:13:54.441 "nvme_admin": false, 00:13:54.441 "nvme_io": false 00:13:54.441 }, 00:13:54.441 "memory_domains": [ 00:13:54.441 { 00:13:54.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:54.441 "dma_device_type": 2 00:13:54.441 } 00:13:54.441 ], 00:13:54.441 "driver_specific": {} 00:13:54.441 } 00:13:54.441 ] 00:13:54.442 07:14:28 -- common/autotest_common.sh@893 -- # return 0 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:54.442 07:14:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.700 07:14:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:54.700 "name": "Existed_Raid", 00:13:54.700 "uuid": "d016df6d-2beb-4d77-915d-474f629460cc", 00:13:54.700 "strip_size_kb": 64, 00:13:54.700 "state": "online", 00:13:54.700 "raid_level": "raid0", 00:13:54.700 "superblock": false, 00:13:54.700 "num_base_bdevs": 2, 00:13:54.700 "num_base_bdevs_discovered": 2, 00:13:54.700 "num_base_bdevs_operational": 2, 00:13:54.700 "base_bdevs_list": [ 00:13:54.700 { 00:13:54.700 "name": "BaseBdev1", 00:13:54.700 "uuid": "a738267d-cfd0-46c8-bcc3-5317fc4c1b95", 00:13:54.700 "is_configured": true, 00:13:54.700 "data_offset": 0, 00:13:54.700 "data_size": 65536 00:13:54.700 }, 00:13:54.700 { 00:13:54.700 "name": "BaseBdev2", 
00:13:54.700 "uuid": "b03599a1-8815-49e6-9a96-658fb6fdc418", 00:13:54.700 "is_configured": true, 00:13:54.700 "data_offset": 0, 00:13:54.700 "data_size": 65536 00:13:54.700 } 00:13:54.700 ] 00:13:54.700 }' 00:13:54.700 07:14:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:54.700 07:14:28 -- common/autotest_common.sh@10 -- # set +x 00:13:55.637 07:14:29 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:55.638 [2024-02-13 07:14:29.213465] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:55.638 [2024-02-13 07:14:29.213504] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:55.638 [2024-02-13 07:14:29.213608] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@197 -- # return 1 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:55.638 07:14:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.897 07:14:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:55.897 "name": "Existed_Raid", 00:13:55.897 "uuid": "d016df6d-2beb-4d77-915d-474f629460cc", 00:13:55.897 "strip_size_kb": 64, 00:13:55.897 "state": "offline", 00:13:55.897 "raid_level": "raid0", 00:13:55.897 "superblock": false, 00:13:55.897 "num_base_bdevs": 2, 00:13:55.897 "num_base_bdevs_discovered": 1, 00:13:55.897 "num_base_bdevs_operational": 1, 00:13:55.897 "base_bdevs_list": [ 00:13:55.897 { 00:13:55.897 "name": null, 00:13:55.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.897 "is_configured": false, 00:13:55.897 "data_offset": 0, 00:13:55.897 "data_size": 65536 00:13:55.897 }, 00:13:55.897 { 00:13:55.897 "name": "BaseBdev2", 00:13:55.897 "uuid": "b03599a1-8815-49e6-9a96-658fb6fdc418", 00:13:55.897 "is_configured": true, 00:13:55.897 "data_offset": 0, 00:13:55.897 "data_size": 65536 00:13:55.897 } 00:13:55.897 ] 00:13:55.897 }' 00:13:55.897 07:14:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:55.897 07:14:29 -- common/autotest_common.sh@10 -- # set +x 00:13:56.834 07:14:30 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:56.834 07:14:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:56.834 07:14:30 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.834 07:14:30 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:56.834 07:14:30 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:56.834 07:14:30 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:56.834 07:14:30 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:57.093 [2024-02-13 07:14:30.667037] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:57.093 [2024-02-13 07:14:30.667210] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:13:57.352 07:14:30 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:57.352 07:14:30 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:57.353 07:14:30 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.353 07:14:30 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:57.353 07:14:30 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:57.353 07:14:30 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:57.353 07:14:30 -- bdev/bdev_raid.sh@287 -- # killprocess 116211 00:13:57.353 07:14:30 -- common/autotest_common.sh@924 -- # '[' -z 116211 ']' 00:13:57.353 07:14:30 -- common/autotest_common.sh@928 -- # kill -0 116211 00:13:57.353 07:14:30 -- common/autotest_common.sh@929 -- # uname 00:13:57.353 07:14:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:13:57.353 07:14:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 116211 00:13:57.353 killing process with pid 116211 00:13:57.353 07:14:31 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:13:57.353 07:14:31 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:13:57.353 07:14:31 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 116211' 00:13:57.353 07:14:31 -- common/autotest_common.sh@943 -- # kill 116211 00:13:57.353 07:14:31 -- common/autotest_common.sh@948 -- # wait 116211 00:13:57.353 [2024-02-13 07:14:31.020224] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.353 [2024-02-13 07:14:31.020389] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:58.730 ************************************ 00:13:58.730 END TEST raid_state_function_test 00:13:58.730 ************************************ 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:58.730 00:13:58.730 real 0m10.565s 00:13:58.730 user 0m18.591s 00:13:58.730 sys 0m1.176s 00:13:58.730 07:14:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:58.730 07:14:32 -- common/autotest_common.sh@10 -- # set +x 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:13:58.730 07:14:32 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:13:58.730 07:14:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:13:58.730 07:14:32 -- common/autotest_common.sh@10 -- # set +x 00:13:58.730 ************************************ 00:13:58.730 START TEST raid_state_function_test_sb 00:13:58.730 ************************************ 00:13:58.730 07:14:32 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 2 true 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:13:58.730 07:14:32 -- 
bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@226 -- # raid_pid=116558 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 116558' 00:13:58.730 Process raid pid: 116558 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@228 -- # waitforlisten 116558 /var/tmp/spdk-raid.sock 00:13:58.730 07:14:32 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:58.730 07:14:32 -- common/autotest_common.sh@817 -- # '[' -z 116558 ']' 00:13:58.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:58.730 07:14:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:58.730 07:14:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:58.730 07:14:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:58.730 07:14:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:58.730 07:14:32 -- common/autotest_common.sh@10 -- # set +x 00:13:58.730 [2024-02-13 07:14:32.192021] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
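The superblock variant starting here differs from the previous run only in the -s flag passed to bdev_raid_create. With on-disk superblocks, the first 2048 blocks of each 65536-block base are reserved for metadata, which the bdev dumps further below confirm (data_offset 2048, data_size 63488, raid blockcnt 126976 rather than 131072). A minimal sketch of the difference, assuming both malloc bases already exist:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  $rpc bdev_malloc_create 32 512 -b BaseBdev1
  $rpc bdev_malloc_create 32 512 -b BaseBdev2
  $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # 2048 blocks of each base are reserved for the superblock, so the raid
  # exposes 2 * 63488 = 126976 blocks instead of 2 * 65536 = 131072.
  $rpc bdev_raid_get_bdevs all |
    jq '.[] | select(.name == "Existed_Raid") | .base_bdevs_list[].data_offset'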
00:13:58.730 [2024-02-13 07:14:32.192235] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.730 [2024-02-13 07:14:32.354494] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.989 [2024-02-13 07:14:32.578895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.248 [2024-02-13 07:14:32.769744] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.507 07:14:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:59.507 07:14:33 -- common/autotest_common.sh@850 -- # return 0 00:13:59.507 07:14:33 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:13:59.767 [2024-02-13 07:14:33.279943] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:59.767 [2024-02-13 07:14:33.280056] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:59.767 [2024-02-13 07:14:33.280094] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:59.767 [2024-02-13 07:14:33.280114] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:59.767 07:14:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:00.026 07:14:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:00.026 "name": "Existed_Raid", 00:14:00.026 "uuid": "33108fbd-ec31-43b7-975f-841f3d92ecb9", 00:14:00.026 "strip_size_kb": 64, 00:14:00.026 "state": "configuring", 00:14:00.026 "raid_level": "raid0", 00:14:00.026 "superblock": true, 00:14:00.026 "num_base_bdevs": 2, 00:14:00.026 "num_base_bdevs_discovered": 0, 00:14:00.026 "num_base_bdevs_operational": 2, 00:14:00.026 "base_bdevs_list": [ 00:14:00.026 { 00:14:00.026 "name": "BaseBdev1", 00:14:00.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.026 "is_configured": false, 00:14:00.027 "data_offset": 0, 00:14:00.027 "data_size": 0 00:14:00.027 }, 00:14:00.027 { 00:14:00.027 "name": "BaseBdev2", 00:14:00.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.027 "is_configured": false, 00:14:00.027 "data_offset": 0, 00:14:00.027 "data_size": 0 00:14:00.027 } 00:14:00.027 ] 00:14:00.027 }' 00:14:00.027 07:14:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:00.027 07:14:33 -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.595 07:14:34 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:00.854 [2024-02-13 07:14:34.476004] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:00.854 [2024-02-13 07:14:34.476040] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:00.854 07:14:34 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:01.114 [2024-02-13 07:14:34.724100] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:01.114 [2024-02-13 07:14:34.724170] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:01.114 [2024-02-13 07:14:34.724200] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:01.114 [2024-02-13 07:14:34.724223] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:01.114 07:14:34 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:01.373 [2024-02-13 07:14:34.954835] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:01.373 BaseBdev1 00:14:01.373 07:14:34 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:01.373 07:14:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:14:01.373 07:14:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:01.373 07:14:34 -- common/autotest_common.sh@887 -- # local i 00:14:01.373 07:14:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:01.373 07:14:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:01.373 07:14:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:01.633 07:14:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:01.892 [ 00:14:01.892 { 00:14:01.892 "name": "BaseBdev1", 00:14:01.892 "aliases": [ 00:14:01.892 "01d499ad-807a-426f-9cac-e4f0b4bc3315" 00:14:01.892 ], 00:14:01.892 "product_name": "Malloc disk", 00:14:01.892 "block_size": 512, 00:14:01.892 "num_blocks": 65536, 00:14:01.892 "uuid": "01d499ad-807a-426f-9cac-e4f0b4bc3315", 00:14:01.892 "assigned_rate_limits": { 00:14:01.892 "rw_ios_per_sec": 0, 00:14:01.892 "rw_mbytes_per_sec": 0, 00:14:01.892 "r_mbytes_per_sec": 0, 00:14:01.892 "w_mbytes_per_sec": 0 00:14:01.892 }, 00:14:01.892 "claimed": true, 00:14:01.892 "claim_type": "exclusive_write", 00:14:01.892 "zoned": false, 00:14:01.892 "supported_io_types": { 00:14:01.892 "read": true, 00:14:01.892 "write": true, 00:14:01.892 "unmap": true, 00:14:01.892 "write_zeroes": true, 00:14:01.892 "flush": true, 00:14:01.892 "reset": true, 00:14:01.892 "compare": false, 00:14:01.892 "compare_and_write": false, 00:14:01.892 "abort": true, 00:14:01.892 "nvme_admin": false, 00:14:01.892 "nvme_io": false 00:14:01.892 }, 00:14:01.892 "memory_domains": [ 00:14:01.892 { 00:14:01.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:01.892 "dma_device_type": 2 00:14:01.892 } 00:14:01.892 ], 00:14:01.892 "driver_specific": {} 00:14:01.892 } 00:14:01.892 ] 00:14:01.892 
07:14:35 -- common/autotest_common.sh@893 -- # return 0 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:01.892 07:14:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:01.892 "name": "Existed_Raid", 00:14:01.892 "uuid": "b67339f1-f58a-45c6-99db-ade077b6f96c", 00:14:01.892 "strip_size_kb": 64, 00:14:01.892 "state": "configuring", 00:14:01.892 "raid_level": "raid0", 00:14:01.892 "superblock": true, 00:14:01.892 "num_base_bdevs": 2, 00:14:01.892 "num_base_bdevs_discovered": 1, 00:14:01.892 "num_base_bdevs_operational": 2, 00:14:01.892 "base_bdevs_list": [ 00:14:01.892 { 00:14:01.892 "name": "BaseBdev1", 00:14:01.892 "uuid": "01d499ad-807a-426f-9cac-e4f0b4bc3315", 00:14:01.892 "is_configured": true, 00:14:01.892 "data_offset": 2048, 00:14:01.893 "data_size": 63488 00:14:01.893 }, 00:14:01.893 { 00:14:01.893 "name": "BaseBdev2", 00:14:01.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.893 "is_configured": false, 00:14:01.893 "data_offset": 0, 00:14:01.893 "data_size": 0 00:14:01.893 } 00:14:01.893 ] 00:14:01.893 }' 00:14:01.893 07:14:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:01.893 07:14:35 -- common/autotest_common.sh@10 -- # set +x 00:14:02.830 07:14:36 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:02.830 [2024-02-13 07:14:36.487167] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:02.830 [2024-02-13 07:14:36.487216] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:02.830 07:14:36 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:02.830 07:14:36 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:03.088 07:14:36 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:03.346 BaseBdev1 00:14:03.346 07:14:37 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:03.346 07:14:37 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:14:03.346 07:14:37 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:03.346 07:14:37 -- common/autotest_common.sh@887 -- # local i 00:14:03.346 07:14:37 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:03.346 07:14:37 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:03.346 07:14:37 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:03.605 07:14:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:03.863 [ 00:14:03.864 { 00:14:03.864 "name": "BaseBdev1", 00:14:03.864 "aliases": [ 00:14:03.864 "6c3f7a75-72a3-4c11-a534-f2450a369996" 00:14:03.864 ], 00:14:03.864 "product_name": "Malloc disk", 00:14:03.864 "block_size": 512, 00:14:03.864 "num_blocks": 65536, 00:14:03.864 "uuid": "6c3f7a75-72a3-4c11-a534-f2450a369996", 00:14:03.864 "assigned_rate_limits": { 00:14:03.864 "rw_ios_per_sec": 0, 00:14:03.864 "rw_mbytes_per_sec": 0, 00:14:03.864 "r_mbytes_per_sec": 0, 00:14:03.864 "w_mbytes_per_sec": 0 00:14:03.864 }, 00:14:03.864 "claimed": false, 00:14:03.864 "zoned": false, 00:14:03.864 "supported_io_types": { 00:14:03.864 "read": true, 00:14:03.864 "write": true, 00:14:03.864 "unmap": true, 00:14:03.864 "write_zeroes": true, 00:14:03.864 "flush": true, 00:14:03.864 "reset": true, 00:14:03.864 "compare": false, 00:14:03.864 "compare_and_write": false, 00:14:03.864 "abort": true, 00:14:03.864 "nvme_admin": false, 00:14:03.864 "nvme_io": false 00:14:03.864 }, 00:14:03.864 "memory_domains": [ 00:14:03.864 { 00:14:03.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:03.864 "dma_device_type": 2 00:14:03.864 } 00:14:03.864 ], 00:14:03.864 "driver_specific": {} 00:14:03.864 } 00:14:03.864 ] 00:14:03.864 07:14:37 -- common/autotest_common.sh@893 -- # return 0 00:14:03.864 07:14:37 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:04.123 [2024-02-13 07:14:37.580260] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.123 [2024-02-13 07:14:37.582126] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:04.123 [2024-02-13 07:14:37.582202] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.123 07:14:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:04.381 07:14:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:04.381 "name": "Existed_Raid", 00:14:04.381 "uuid": "c10fc7c7-32a9-4ad1-a572-5ba606ef31ec", 00:14:04.381 "strip_size_kb": 64, 00:14:04.381 "state": 
"configuring", 00:14:04.381 "raid_level": "raid0", 00:14:04.381 "superblock": true, 00:14:04.381 "num_base_bdevs": 2, 00:14:04.381 "num_base_bdevs_discovered": 1, 00:14:04.381 "num_base_bdevs_operational": 2, 00:14:04.381 "base_bdevs_list": [ 00:14:04.381 { 00:14:04.381 "name": "BaseBdev1", 00:14:04.381 "uuid": "6c3f7a75-72a3-4c11-a534-f2450a369996", 00:14:04.381 "is_configured": true, 00:14:04.381 "data_offset": 2048, 00:14:04.381 "data_size": 63488 00:14:04.381 }, 00:14:04.381 { 00:14:04.381 "name": "BaseBdev2", 00:14:04.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.381 "is_configured": false, 00:14:04.381 "data_offset": 0, 00:14:04.381 "data_size": 0 00:14:04.381 } 00:14:04.381 ] 00:14:04.381 }' 00:14:04.381 07:14:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:04.381 07:14:37 -- common/autotest_common.sh@10 -- # set +x 00:14:04.949 07:14:38 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:05.207 [2024-02-13 07:14:38.836847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.207 [2024-02-13 07:14:38.837221] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:14:05.207 [2024-02-13 07:14:38.837240] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:05.207 [2024-02-13 07:14:38.837366] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:05.207 BaseBdev2 00:14:05.207 [2024-02-13 07:14:38.837787] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:14:05.207 [2024-02-13 07:14:38.837801] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:14:05.207 [2024-02-13 07:14:38.837949] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.207 07:14:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:05.207 07:14:38 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:14:05.207 07:14:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:05.207 07:14:38 -- common/autotest_common.sh@887 -- # local i 00:14:05.207 07:14:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:05.207 07:14:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:05.207 07:14:38 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:05.465 07:14:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:05.723 [ 00:14:05.723 { 00:14:05.723 "name": "BaseBdev2", 00:14:05.723 "aliases": [ 00:14:05.723 "e5ece3af-a32b-4ca4-a15a-d1809b9243b5" 00:14:05.723 ], 00:14:05.723 "product_name": "Malloc disk", 00:14:05.724 "block_size": 512, 00:14:05.724 "num_blocks": 65536, 00:14:05.724 "uuid": "e5ece3af-a32b-4ca4-a15a-d1809b9243b5", 00:14:05.724 "assigned_rate_limits": { 00:14:05.724 "rw_ios_per_sec": 0, 00:14:05.724 "rw_mbytes_per_sec": 0, 00:14:05.724 "r_mbytes_per_sec": 0, 00:14:05.724 "w_mbytes_per_sec": 0 00:14:05.724 }, 00:14:05.724 "claimed": true, 00:14:05.724 "claim_type": "exclusive_write", 00:14:05.724 "zoned": false, 00:14:05.724 "supported_io_types": { 00:14:05.724 "read": true, 00:14:05.724 "write": true, 00:14:05.724 "unmap": true, 00:14:05.724 "write_zeroes": true, 00:14:05.724 "flush": true, 00:14:05.724 
"reset": true, 00:14:05.724 "compare": false, 00:14:05.724 "compare_and_write": false, 00:14:05.724 "abort": true, 00:14:05.724 "nvme_admin": false, 00:14:05.724 "nvme_io": false 00:14:05.724 }, 00:14:05.724 "memory_domains": [ 00:14:05.724 { 00:14:05.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:05.724 "dma_device_type": 2 00:14:05.724 } 00:14:05.724 ], 00:14:05.724 "driver_specific": {} 00:14:05.724 } 00:14:05.724 ] 00:14:05.724 07:14:39 -- common/autotest_common.sh@893 -- # return 0 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.724 07:14:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:05.981 07:14:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:05.981 "name": "Existed_Raid", 00:14:05.981 "uuid": "c10fc7c7-32a9-4ad1-a572-5ba606ef31ec", 00:14:05.981 "strip_size_kb": 64, 00:14:05.981 "state": "online", 00:14:05.981 "raid_level": "raid0", 00:14:05.981 "superblock": true, 00:14:05.981 "num_base_bdevs": 2, 00:14:05.981 "num_base_bdevs_discovered": 2, 00:14:05.981 "num_base_bdevs_operational": 2, 00:14:05.981 "base_bdevs_list": [ 00:14:05.981 { 00:14:05.981 "name": "BaseBdev1", 00:14:05.981 "uuid": "6c3f7a75-72a3-4c11-a534-f2450a369996", 00:14:05.981 "is_configured": true, 00:14:05.981 "data_offset": 2048, 00:14:05.981 "data_size": 63488 00:14:05.981 }, 00:14:05.982 { 00:14:05.982 "name": "BaseBdev2", 00:14:05.982 "uuid": "e5ece3af-a32b-4ca4-a15a-d1809b9243b5", 00:14:05.982 "is_configured": true, 00:14:05.982 "data_offset": 2048, 00:14:05.982 "data_size": 63488 00:14:05.982 } 00:14:05.982 ] 00:14:05.982 }' 00:14:05.982 07:14:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:05.982 07:14:39 -- common/autotest_common.sh@10 -- # set +x 00:14:06.562 07:14:40 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:06.843 [2024-02-13 07:14:40.393376] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.843 [2024-02-13 07:14:40.393427] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.843 [2024-02-13 07:14:40.393504] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:06.843 
07:14:40 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.843 07:14:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.101 07:14:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:07.101 "name": "Existed_Raid", 00:14:07.101 "uuid": "c10fc7c7-32a9-4ad1-a572-5ba606ef31ec", 00:14:07.101 "strip_size_kb": 64, 00:14:07.102 "state": "offline", 00:14:07.102 "raid_level": "raid0", 00:14:07.102 "superblock": true, 00:14:07.102 "num_base_bdevs": 2, 00:14:07.102 "num_base_bdevs_discovered": 1, 00:14:07.102 "num_base_bdevs_operational": 1, 00:14:07.102 "base_bdevs_list": [ 00:14:07.102 { 00:14:07.102 "name": null, 00:14:07.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.102 "is_configured": false, 00:14:07.102 "data_offset": 2048, 00:14:07.102 "data_size": 63488 00:14:07.102 }, 00:14:07.102 { 00:14:07.102 "name": "BaseBdev2", 00:14:07.102 "uuid": "e5ece3af-a32b-4ca4-a15a-d1809b9243b5", 00:14:07.102 "is_configured": true, 00:14:07.102 "data_offset": 2048, 00:14:07.102 "data_size": 63488 00:14:07.102 } 00:14:07.102 ] 00:14:07.102 }' 00:14:07.102 07:14:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:07.102 07:14:40 -- common/autotest_common.sh@10 -- # set +x 00:14:08.038 07:14:41 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:08.038 07:14:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:08.038 07:14:41 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.038 07:14:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:08.038 07:14:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:08.038 07:14:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:08.038 07:14:41 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:08.296 [2024-02-13 07:14:41.839462] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:08.297 [2024-02-13 07:14:41.839572] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:14:08.297 07:14:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:08.297 07:14:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:08.297 07:14:41 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.297 07:14:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:08.556 07:14:42 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:08.556 07:14:42 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:08.556 07:14:42 -- bdev/bdev_raid.sh@287 -- # killprocess 116558 00:14:08.556 07:14:42 -- common/autotest_common.sh@924 -- # '[' -z 116558 ']' 00:14:08.556 07:14:42 -- common/autotest_common.sh@928 -- # kill -0 116558 00:14:08.556 07:14:42 -- common/autotest_common.sh@929 -- # uname 00:14:08.556 07:14:42 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:08.556 07:14:42 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 116558 00:14:08.556 killing process with pid 116558 00:14:08.556 07:14:42 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:08.556 07:14:42 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:08.556 07:14:42 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 116558' 00:14:08.556 07:14:42 -- common/autotest_common.sh@943 -- # kill 116558 00:14:08.556 07:14:42 -- common/autotest_common.sh@948 -- # wait 116558 00:14:08.556 [2024-02-13 07:14:42.239902] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:08.556 [2024-02-13 07:14:42.240101] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:09.933 ************************************ 00:14:09.933 END TEST raid_state_function_test_sb 00:14:09.933 ************************************ 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:09.933 00:14:09.933 real 0m11.075s 00:14:09.933 user 0m19.510s 00:14:09.933 sys 0m1.257s 00:14:09.933 07:14:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:09.933 07:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:14:09.933 07:14:43 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:14:09.933 07:14:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:09.933 07:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.933 ************************************ 00:14:09.933 START TEST raid_superblock_test 00:14:09.933 ************************************ 00:14:09.933 07:14:43 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid0 2 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@357 -- # raid_pid=116915 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@358 -- # waitforlisten 116915 
/var/tmp/spdk-raid.sock 00:14:09.933 07:14:43 -- common/autotest_common.sh@817 -- # '[' -z 116915 ']' 00:14:09.933 07:14:43 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:09.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:09.933 07:14:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:09.933 07:14:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:09.933 07:14:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:09.933 07:14:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:09.933 07:14:43 -- common/autotest_common.sh@10 -- # set +x 00:14:09.933 [2024-02-13 07:14:43.327972] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:14:09.933 [2024-02-13 07:14:43.328194] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116915 ] 00:14:09.933 [2024-02-13 07:14:43.497753] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.192 [2024-02-13 07:14:43.685328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:10.192 [2024-02-13 07:14:43.856829] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:10.759 07:14:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:10.759 07:14:44 -- common/autotest_common.sh@850 -- # return 0 00:14:10.759 07:14:44 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:10.760 07:14:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:10.760 07:14:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:10.760 07:14:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:10.760 07:14:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:10.760 07:14:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:10.760 07:14:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:10.760 07:14:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:10.760 07:14:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:11.018 malloc1 00:14:11.018 07:14:44 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:11.276 [2024-02-13 07:14:44.764743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:11.276 [2024-02-13 07:14:44.764863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.276 [2024-02-13 07:14:44.764897] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:11.276 [2024-02-13 07:14:44.764954] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.276 [2024-02-13 07:14:44.767441] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.276 [2024-02-13 07:14:44.767515] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:11.276 pt1 00:14:11.276 07:14:44 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 
00:14:11.276 07:14:44 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:11.276 07:14:44 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:11.276 07:14:44 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:11.276 07:14:44 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:11.276 07:14:44 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:11.276 07:14:44 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:11.276 07:14:44 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:11.276 07:14:44 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:11.535 malloc2 00:14:11.535 07:14:45 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:11.796 [2024-02-13 07:14:45.232750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:11.796 [2024-02-13 07:14:45.232862] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.796 [2024-02-13 07:14:45.232906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:11.796 [2024-02-13 07:14:45.232988] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.796 [2024-02-13 07:14:45.235403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.796 [2024-02-13 07:14:45.235468] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:11.796 pt2 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:14:11.796 [2024-02-13 07:14:45.452856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:11.796 [2024-02-13 07:14:45.454885] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:11.796 [2024-02-13 07:14:45.455084] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:11.796 [2024-02-13 07:14:45.455099] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:11.796 [2024-02-13 07:14:45.455283] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:11.796 [2024-02-13 07:14:45.455672] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:11.796 [2024-02-13 07:14:45.455696] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:11.796 [2024-02-13 07:14:45.455872] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.796 07:14:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.055 07:14:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:12.055 "name": "raid_bdev1", 00:14:12.055 "uuid": "0fa2d379-5be4-4283-a228-06a8122e3949", 00:14:12.055 "strip_size_kb": 64, 00:14:12.055 "state": "online", 00:14:12.055 "raid_level": "raid0", 00:14:12.055 "superblock": true, 00:14:12.055 "num_base_bdevs": 2, 00:14:12.055 "num_base_bdevs_discovered": 2, 00:14:12.055 "num_base_bdevs_operational": 2, 00:14:12.055 "base_bdevs_list": [ 00:14:12.055 { 00:14:12.055 "name": "pt1", 00:14:12.055 "uuid": "d10a0e21-a88a-5066-b3fe-91d84948fea0", 00:14:12.055 "is_configured": true, 00:14:12.055 "data_offset": 2048, 00:14:12.055 "data_size": 63488 00:14:12.055 }, 00:14:12.055 { 00:14:12.055 "name": "pt2", 00:14:12.055 "uuid": "55ed27b5-53c1-533d-991e-d0a6e276f75b", 00:14:12.055 "is_configured": true, 00:14:12.055 "data_offset": 2048, 00:14:12.055 "data_size": 63488 00:14:12.055 } 00:14:12.055 ] 00:14:12.055 }' 00:14:12.055 07:14:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:12.055 07:14:45 -- common/autotest_common.sh@10 -- # set +x 00:14:12.991 07:14:46 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:12.991 07:14:46 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:12.991 [2024-02-13 07:14:46.605441] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:12.991 07:14:46 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=0fa2d379-5be4-4283-a228-06a8122e3949 00:14:12.991 07:14:46 -- bdev/bdev_raid.sh@380 -- # '[' -z 0fa2d379-5be4-4283-a228-06a8122e3949 ']' 00:14:12.991 07:14:46 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:13.249 [2024-02-13 07:14:46.857219] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:13.249 [2024-02-13 07:14:46.857248] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:13.249 [2024-02-13 07:14:46.857355] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:13.249 [2024-02-13 07:14:46.857411] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:13.249 [2024-02-13 07:14:46.857422] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:13.249 07:14:46 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.249 07:14:46 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:13.508 07:14:47 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:13.508 07:14:47 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:13.508 07:14:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:13.508 07:14:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
00:14:13.767 07:14:47 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:13.767 07:14:47 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:14.025 07:14:47 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:14.025 07:14:47 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:14.284 07:14:47 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:14.284 07:14:47 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:14.284 07:14:47 -- common/autotest_common.sh@638 -- # local es=0 00:14:14.284 07:14:47 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:14.284 07:14:47 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.284 07:14:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:14.284 07:14:47 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.284 07:14:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:14.284 07:14:47 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.284 07:14:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:14.284 07:14:47 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:14.284 07:14:47 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:14.284 07:14:47 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:14:14.542 [2024-02-13 07:14:48.005508] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:14.542 [2024-02-13 07:14:48.007208] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:14.542 [2024-02-13 07:14:48.007282] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:14.542 [2024-02-13 07:14:48.007366] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:14.542 [2024-02-13 07:14:48.007399] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:14.543 [2024-02-13 07:14:48.007410] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:14:14.543 request: 00:14:14.543 { 00:14:14.543 "name": "raid_bdev1", 00:14:14.543 "raid_level": "raid0", 00:14:14.543 "base_bdevs": [ 00:14:14.543 "malloc1", 00:14:14.543 "malloc2" 00:14:14.543 ], 00:14:14.543 "superblock": false, 00:14:14.543 "strip_size_kb": 64, 00:14:14.543 "method": "bdev_raid_create", 00:14:14.543 "req_id": 1 00:14:14.543 } 00:14:14.543 Got JSON-RPC error response 00:14:14.543 response: 00:14:14.543 { 00:14:14.543 "code": -17, 00:14:14.543 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:14.543 } 00:14:14.543 07:14:48 -- common/autotest_common.sh@641 -- # es=1 00:14:14.543 07:14:48 -- common/autotest_common.sh@649 
-- # (( es > 128 )) 00:14:14.543 07:14:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:14.543 07:14:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:14.543 07:14:48 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.543 07:14:48 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:14.543 07:14:48 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:14.543 07:14:48 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:14.543 07:14:48 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:14.801 [2024-02-13 07:14:48.421611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:14.801 [2024-02-13 07:14:48.421707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.801 [2024-02-13 07:14:48.421742] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:14.801 [2024-02-13 07:14:48.421766] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.801 [2024-02-13 07:14:48.423715] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.801 [2024-02-13 07:14:48.423763] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:14.801 [2024-02-13 07:14:48.423863] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:14.801 [2024-02-13 07:14:48.423920] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:14.801 pt1 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.802 07:14:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.060 07:14:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:15.060 "name": "raid_bdev1", 00:14:15.060 "uuid": "0fa2d379-5be4-4283-a228-06a8122e3949", 00:14:15.060 "strip_size_kb": 64, 00:14:15.060 "state": "configuring", 00:14:15.060 "raid_level": "raid0", 00:14:15.060 "superblock": true, 00:14:15.060 "num_base_bdevs": 2, 00:14:15.060 "num_base_bdevs_discovered": 1, 00:14:15.060 "num_base_bdevs_operational": 2, 00:14:15.060 "base_bdevs_list": [ 00:14:15.060 { 00:14:15.060 "name": "pt1", 00:14:15.060 "uuid": "d10a0e21-a88a-5066-b3fe-91d84948fea0", 00:14:15.060 "is_configured": true, 00:14:15.060 "data_offset": 2048, 00:14:15.060 "data_size": 63488 00:14:15.060 }, 00:14:15.060 { 00:14:15.060 "name": null, 00:14:15.060 "uuid": "55ed27b5-53c1-533d-991e-d0a6e276f75b", 00:14:15.060 
"is_configured": false, 00:14:15.060 "data_offset": 2048, 00:14:15.060 "data_size": 63488 00:14:15.060 } 00:14:15.060 ] 00:14:15.060 }' 00:14:15.060 07:14:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:15.060 07:14:48 -- common/autotest_common.sh@10 -- # set +x 00:14:15.627 07:14:49 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:15.627 07:14:49 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:15.627 07:14:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:15.627 07:14:49 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:15.886 [2024-02-13 07:14:49.541867] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:15.886 [2024-02-13 07:14:49.541986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.886 [2024-02-13 07:14:49.542024] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:15.886 [2024-02-13 07:14:49.542050] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.886 [2024-02-13 07:14:49.542580] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.886 [2024-02-13 07:14:49.542639] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:15.886 [2024-02-13 07:14:49.542758] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:15.886 [2024-02-13 07:14:49.542786] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:15.886 [2024-02-13 07:14:49.542923] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:14:15.886 [2024-02-13 07:14:49.542936] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:15.886 [2024-02-13 07:14:49.543081] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:15.886 [2024-02-13 07:14:49.543434] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:14:15.886 [2024-02-13 07:14:49.543465] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:14:15.886 [2024-02-13 07:14:49.543655] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.886 pt2 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:15.886 07:14:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.886 07:14:49 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.145 07:14:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:16.145 "name": "raid_bdev1", 00:14:16.145 "uuid": "0fa2d379-5be4-4283-a228-06a8122e3949", 00:14:16.145 "strip_size_kb": 64, 00:14:16.145 "state": "online", 00:14:16.145 "raid_level": "raid0", 00:14:16.145 "superblock": true, 00:14:16.145 "num_base_bdevs": 2, 00:14:16.145 "num_base_bdevs_discovered": 2, 00:14:16.145 "num_base_bdevs_operational": 2, 00:14:16.145 "base_bdevs_list": [ 00:14:16.145 { 00:14:16.145 "name": "pt1", 00:14:16.145 "uuid": "d10a0e21-a88a-5066-b3fe-91d84948fea0", 00:14:16.145 "is_configured": true, 00:14:16.145 "data_offset": 2048, 00:14:16.145 "data_size": 63488 00:14:16.145 }, 00:14:16.145 { 00:14:16.145 "name": "pt2", 00:14:16.145 "uuid": "55ed27b5-53c1-533d-991e-d0a6e276f75b", 00:14:16.145 "is_configured": true, 00:14:16.145 "data_offset": 2048, 00:14:16.145 "data_size": 63488 00:14:16.145 } 00:14:16.145 ] 00:14:16.145 }' 00:14:16.145 07:14:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:16.145 07:14:49 -- common/autotest_common.sh@10 -- # set +x 00:14:17.083 07:14:50 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:17.083 07:14:50 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:17.083 [2024-02-13 07:14:50.662360] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:17.083 07:14:50 -- bdev/bdev_raid.sh@430 -- # '[' 0fa2d379-5be4-4283-a228-06a8122e3949 '!=' 0fa2d379-5be4-4283-a228-06a8122e3949 ']' 00:14:17.083 07:14:50 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:14:17.083 07:14:50 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:17.083 07:14:50 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:17.083 07:14:50 -- bdev/bdev_raid.sh@511 -- # killprocess 116915 00:14:17.083 07:14:50 -- common/autotest_common.sh@924 -- # '[' -z 116915 ']' 00:14:17.083 07:14:50 -- common/autotest_common.sh@928 -- # kill -0 116915 00:14:17.083 07:14:50 -- common/autotest_common.sh@929 -- # uname 00:14:17.083 07:14:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:17.083 07:14:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 116915 00:14:17.083 killing process with pid 116915 00:14:17.083 07:14:50 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:17.083 07:14:50 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:17.083 07:14:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 116915' 00:14:17.083 07:14:50 -- common/autotest_common.sh@943 -- # kill 116915 00:14:17.083 07:14:50 -- common/autotest_common.sh@948 -- # wait 116915 00:14:17.083 [2024-02-13 07:14:50.701009] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:17.083 [2024-02-13 07:14:50.701128] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.083 [2024-02-13 07:14:50.701197] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:17.083 [2024-02-13 07:14:50.701217] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:14:17.342 [2024-02-13 07:14:50.849238] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.279 ************************************ 00:14:18.279 END TEST raid_superblock_test 00:14:18.279 ************************************ 00:14:18.279 07:14:51 -- 
bdev/bdev_raid.sh@513 -- # return 0 00:14:18.279 00:14:18.279 real 0m8.590s 00:14:18.279 user 0m14.760s 00:14:18.279 sys 0m1.075s 00:14:18.279 07:14:51 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:18.279 07:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:14:18.279 07:14:51 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:14:18.279 07:14:51 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:18.279 07:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:18.279 ************************************ 00:14:18.279 START TEST raid_state_function_test 00:14:18.279 ************************************ 00:14:18.279 07:14:51 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 2 false 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=117173 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:18.279 Process raid pid: 117173 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117173' 00:14:18.279 07:14:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117173 /var/tmp/spdk-raid.sock 00:14:18.279 07:14:51 -- common/autotest_common.sh@817 -- # '[' -z 117173 ']' 00:14:18.279 07:14:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:18.279 07:14:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:18.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:14:18.280 07:14:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:18.280 07:14:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:18.280 07:14:51 -- common/autotest_common.sh@10 -- # set +x 00:14:18.538 [2024-02-13 07:14:51.974746] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:14:18.538 [2024-02-13 07:14:51.974916] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.538 [2024-02-13 07:14:52.127744] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.797 [2024-02-13 07:14:52.317492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.056 [2024-02-13 07:14:52.508358] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.315 07:14:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:19.315 07:14:52 -- common/autotest_common.sh@850 -- # return 0 00:14:19.315 07:14:52 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:19.574 [2024-02-13 07:14:53.162457] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.574 [2024-02-13 07:14:53.162560] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.574 [2024-02-13 07:14:53.162574] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.574 [2024-02-13 07:14:53.162593] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.574 07:14:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.832 07:14:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.832 "name": "Existed_Raid", 00:14:19.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.832 "strip_size_kb": 64, 00:14:19.832 "state": "configuring", 00:14:19.832 "raid_level": "concat", 00:14:19.832 "superblock": false, 00:14:19.832 "num_base_bdevs": 2, 00:14:19.832 "num_base_bdevs_discovered": 0, 00:14:19.832 "num_base_bdevs_operational": 2, 00:14:19.832 "base_bdevs_list": [ 00:14:19.832 { 00:14:19.832 "name": "BaseBdev1", 00:14:19.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.832 "is_configured": false, 
00:14:19.832 "data_offset": 0, 00:14:19.832 "data_size": 0 00:14:19.832 }, 00:14:19.832 { 00:14:19.832 "name": "BaseBdev2", 00:14:19.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.832 "is_configured": false, 00:14:19.832 "data_offset": 0, 00:14:19.832 "data_size": 0 00:14:19.832 } 00:14:19.832 ] 00:14:19.832 }' 00:14:19.832 07:14:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.832 07:14:53 -- common/autotest_common.sh@10 -- # set +x 00:14:20.399 07:14:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:20.658 [2024-02-13 07:14:54.283235] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.658 [2024-02-13 07:14:54.283302] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:20.658 07:14:54 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:20.916 [2024-02-13 07:14:54.531316] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:20.916 [2024-02-13 07:14:54.531433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:20.916 [2024-02-13 07:14:54.531448] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.916 [2024-02-13 07:14:54.531473] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.916 07:14:54 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:21.174 [2024-02-13 07:14:54.776855] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:21.174 BaseBdev1 00:14:21.174 07:14:54 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:21.174 07:14:54 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:14:21.174 07:14:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:21.174 07:14:54 -- common/autotest_common.sh@887 -- # local i 00:14:21.174 07:14:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:21.174 07:14:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:21.174 07:14:54 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:21.432 07:14:54 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:21.691 [ 00:14:21.691 { 00:14:21.691 "name": "BaseBdev1", 00:14:21.691 "aliases": [ 00:14:21.691 "c70cefe2-9f23-41a4-8e96-71a99133a8a5" 00:14:21.691 ], 00:14:21.691 "product_name": "Malloc disk", 00:14:21.691 "block_size": 512, 00:14:21.691 "num_blocks": 65536, 00:14:21.691 "uuid": "c70cefe2-9f23-41a4-8e96-71a99133a8a5", 00:14:21.691 "assigned_rate_limits": { 00:14:21.691 "rw_ios_per_sec": 0, 00:14:21.691 "rw_mbytes_per_sec": 0, 00:14:21.691 "r_mbytes_per_sec": 0, 00:14:21.691 "w_mbytes_per_sec": 0 00:14:21.691 }, 00:14:21.691 "claimed": true, 00:14:21.691 "claim_type": "exclusive_write", 00:14:21.691 "zoned": false, 00:14:21.691 "supported_io_types": { 00:14:21.691 "read": true, 00:14:21.691 "write": true, 00:14:21.691 "unmap": true, 00:14:21.691 "write_zeroes": true, 00:14:21.691 "flush": true, 00:14:21.691 "reset": true, 00:14:21.691 
"compare": false, 00:14:21.691 "compare_and_write": false, 00:14:21.691 "abort": true, 00:14:21.691 "nvme_admin": false, 00:14:21.691 "nvme_io": false 00:14:21.691 }, 00:14:21.691 "memory_domains": [ 00:14:21.691 { 00:14:21.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.691 "dma_device_type": 2 00:14:21.691 } 00:14:21.691 ], 00:14:21.691 "driver_specific": {} 00:14:21.691 } 00:14:21.691 ] 00:14:21.691 07:14:55 -- common/autotest_common.sh@893 -- # return 0 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.691 07:14:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.949 07:14:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:21.949 "name": "Existed_Raid", 00:14:21.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.949 "strip_size_kb": 64, 00:14:21.949 "state": "configuring", 00:14:21.949 "raid_level": "concat", 00:14:21.949 "superblock": false, 00:14:21.949 "num_base_bdevs": 2, 00:14:21.949 "num_base_bdevs_discovered": 1, 00:14:21.949 "num_base_bdevs_operational": 2, 00:14:21.949 "base_bdevs_list": [ 00:14:21.949 { 00:14:21.949 "name": "BaseBdev1", 00:14:21.949 "uuid": "c70cefe2-9f23-41a4-8e96-71a99133a8a5", 00:14:21.949 "is_configured": true, 00:14:21.949 "data_offset": 0, 00:14:21.949 "data_size": 65536 00:14:21.949 }, 00:14:21.949 { 00:14:21.949 "name": "BaseBdev2", 00:14:21.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.949 "is_configured": false, 00:14:21.949 "data_offset": 0, 00:14:21.949 "data_size": 0 00:14:21.949 } 00:14:21.950 ] 00:14:21.950 }' 00:14:21.950 07:14:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:21.950 07:14:55 -- common/autotest_common.sh@10 -- # set +x 00:14:22.514 07:14:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:22.772 [2024-02-13 07:14:56.289227] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.772 [2024-02-13 07:14:56.289294] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:22.772 07:14:56 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:22.772 07:14:56 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:23.030 [2024-02-13 07:14:56.537312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.030 [2024-02-13 07:14:56.539401] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:14:23.030 [2024-02-13 07:14:56.539472] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.030 07:14:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.288 07:14:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:23.288 "name": "Existed_Raid", 00:14:23.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.288 "strip_size_kb": 64, 00:14:23.288 "state": "configuring", 00:14:23.288 "raid_level": "concat", 00:14:23.288 "superblock": false, 00:14:23.288 "num_base_bdevs": 2, 00:14:23.288 "num_base_bdevs_discovered": 1, 00:14:23.288 "num_base_bdevs_operational": 2, 00:14:23.288 "base_bdevs_list": [ 00:14:23.288 { 00:14:23.288 "name": "BaseBdev1", 00:14:23.288 "uuid": "c70cefe2-9f23-41a4-8e96-71a99133a8a5", 00:14:23.288 "is_configured": true, 00:14:23.288 "data_offset": 0, 00:14:23.288 "data_size": 65536 00:14:23.288 }, 00:14:23.288 { 00:14:23.288 "name": "BaseBdev2", 00:14:23.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.288 "is_configured": false, 00:14:23.288 "data_offset": 0, 00:14:23.288 "data_size": 0 00:14:23.288 } 00:14:23.288 ] 00:14:23.288 }' 00:14:23.288 07:14:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:23.288 07:14:56 -- common/autotest_common.sh@10 -- # set +x 00:14:23.854 07:14:57 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:24.151 [2024-02-13 07:14:57.733249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.151 [2024-02-13 07:14:57.733312] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:24.151 [2024-02-13 07:14:57.733335] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:24.151 [2024-02-13 07:14:57.733479] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:24.151 [2024-02-13 07:14:57.733892] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:24.151 [2024-02-13 07:14:57.733915] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:14:24.151 [2024-02-13 07:14:57.734222] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.151 BaseBdev2 00:14:24.151 07:14:57 -- bdev/bdev_raid.sh@257 
-- # waitforbdev BaseBdev2 00:14:24.151 07:14:57 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:14:24.151 07:14:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:24.151 07:14:57 -- common/autotest_common.sh@887 -- # local i 00:14:24.151 07:14:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:24.151 07:14:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:24.151 07:14:57 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:24.411 07:14:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:24.670 [ 00:14:24.670 { 00:14:24.670 "name": "BaseBdev2", 00:14:24.670 "aliases": [ 00:14:24.670 "9e6ab28a-2ee9-472e-8fb4-5e8e2ee4703a" 00:14:24.670 ], 00:14:24.670 "product_name": "Malloc disk", 00:14:24.670 "block_size": 512, 00:14:24.670 "num_blocks": 65536, 00:14:24.670 "uuid": "9e6ab28a-2ee9-472e-8fb4-5e8e2ee4703a", 00:14:24.670 "assigned_rate_limits": { 00:14:24.670 "rw_ios_per_sec": 0, 00:14:24.670 "rw_mbytes_per_sec": 0, 00:14:24.670 "r_mbytes_per_sec": 0, 00:14:24.670 "w_mbytes_per_sec": 0 00:14:24.670 }, 00:14:24.670 "claimed": true, 00:14:24.670 "claim_type": "exclusive_write", 00:14:24.670 "zoned": false, 00:14:24.670 "supported_io_types": { 00:14:24.670 "read": true, 00:14:24.670 "write": true, 00:14:24.670 "unmap": true, 00:14:24.670 "write_zeroes": true, 00:14:24.670 "flush": true, 00:14:24.670 "reset": true, 00:14:24.670 "compare": false, 00:14:24.670 "compare_and_write": false, 00:14:24.670 "abort": true, 00:14:24.670 "nvme_admin": false, 00:14:24.670 "nvme_io": false 00:14:24.670 }, 00:14:24.670 "memory_domains": [ 00:14:24.670 { 00:14:24.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.670 "dma_device_type": 2 00:14:24.670 } 00:14:24.670 ], 00:14:24.670 "driver_specific": {} 00:14:24.670 } 00:14:24.670 ] 00:14:24.670 07:14:58 -- common/autotest_common.sh@893 -- # return 0 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.670 07:14:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.929 07:14:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.929 "name": "Existed_Raid", 00:14:24.929 "uuid": "473e2080-e731-4faf-9b8c-2d2227c0fb81", 00:14:24.929 "strip_size_kb": 64, 00:14:24.929 "state": "online", 00:14:24.929 "raid_level": "concat", 00:14:24.929 "superblock": false, 
00:14:24.929 "num_base_bdevs": 2, 00:14:24.929 "num_base_bdevs_discovered": 2, 00:14:24.929 "num_base_bdevs_operational": 2, 00:14:24.929 "base_bdevs_list": [ 00:14:24.929 { 00:14:24.929 "name": "BaseBdev1", 00:14:24.929 "uuid": "c70cefe2-9f23-41a4-8e96-71a99133a8a5", 00:14:24.929 "is_configured": true, 00:14:24.929 "data_offset": 0, 00:14:24.929 "data_size": 65536 00:14:24.929 }, 00:14:24.929 { 00:14:24.929 "name": "BaseBdev2", 00:14:24.929 "uuid": "9e6ab28a-2ee9-472e-8fb4-5e8e2ee4703a", 00:14:24.929 "is_configured": true, 00:14:24.929 "data_offset": 0, 00:14:24.929 "data_size": 65536 00:14:24.929 } 00:14:24.929 ] 00:14:24.929 }' 00:14:24.929 07:14:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.929 07:14:58 -- common/autotest_common.sh@10 -- # set +x 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:25.866 [2024-02-13 07:14:59.393848] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.866 [2024-02-13 07:14:59.393888] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.866 [2024-02-13 07:14:59.393985] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.866 07:14:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.125 07:14:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:26.125 "name": "Existed_Raid", 00:14:26.125 "uuid": "473e2080-e731-4faf-9b8c-2d2227c0fb81", 00:14:26.125 "strip_size_kb": 64, 00:14:26.125 "state": "offline", 00:14:26.125 "raid_level": "concat", 00:14:26.125 "superblock": false, 00:14:26.125 "num_base_bdevs": 2, 00:14:26.125 "num_base_bdevs_discovered": 1, 00:14:26.125 "num_base_bdevs_operational": 1, 00:14:26.125 "base_bdevs_list": [ 00:14:26.125 { 00:14:26.125 "name": null, 00:14:26.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.125 "is_configured": false, 00:14:26.125 "data_offset": 0, 00:14:26.125 "data_size": 65536 00:14:26.125 }, 00:14:26.125 { 00:14:26.125 "name": "BaseBdev2", 00:14:26.125 "uuid": "9e6ab28a-2ee9-472e-8fb4-5e8e2ee4703a", 00:14:26.125 "is_configured": true, 00:14:26.125 "data_offset": 0, 00:14:26.125 
"data_size": 65536 00:14:26.125 } 00:14:26.125 ] 00:14:26.125 }' 00:14:26.125 07:14:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:26.125 07:14:59 -- common/autotest_common.sh@10 -- # set +x 00:14:27.061 07:15:00 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:27.061 07:15:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:27.061 07:15:00 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.061 07:15:00 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:27.061 07:15:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:27.061 07:15:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:27.061 07:15:00 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:27.320 [2024-02-13 07:15:00.888668] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:27.320 [2024-02-13 07:15:00.888769] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:27.320 07:15:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:27.320 07:15:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:27.320 07:15:00 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.320 07:15:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:27.579 07:15:01 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:27.579 07:15:01 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:27.579 07:15:01 -- bdev/bdev_raid.sh@287 -- # killprocess 117173 00:14:27.579 07:15:01 -- common/autotest_common.sh@924 -- # '[' -z 117173 ']' 00:14:27.579 07:15:01 -- common/autotest_common.sh@928 -- # kill -0 117173 00:14:27.579 07:15:01 -- common/autotest_common.sh@929 -- # uname 00:14:27.579 07:15:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:27.579 07:15:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 117173 00:14:27.579 07:15:01 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:27.579 07:15:01 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:27.579 07:15:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 117173' 00:14:27.579 killing process with pid 117173 00:14:27.579 07:15:01 -- common/autotest_common.sh@943 -- # kill 117173 00:14:27.579 07:15:01 -- common/autotest_common.sh@948 -- # wait 117173 00:14:27.579 [2024-02-13 07:15:01.260473] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.579 [2024-02-13 07:15:01.260611] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.964 ************************************ 00:14:28.964 END TEST raid_state_function_test 00:14:28.964 ************************************ 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:28.964 00:14:28.964 real 0m10.407s 00:14:28.964 user 0m18.220s 00:14:28.964 sys 0m1.275s 00:14:28.964 07:15:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:28.964 07:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:14:28.964 07:15:02 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:14:28.964 07:15:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:28.964 07:15:02 -- common/autotest_common.sh@10 -- # 
set +x 00:14:28.964 ************************************ 00:14:28.964 START TEST raid_state_function_test_sb 00:14:28.964 ************************************ 00:14:28.964 07:15:02 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 2 true 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=117516 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 117516' 00:14:28.964 Process raid pid: 117516 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:28.964 07:15:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 117516 /var/tmp/spdk-raid.sock 00:14:28.964 07:15:02 -- common/autotest_common.sh@817 -- # '[' -z 117516 ']' 00:14:28.964 07:15:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:28.964 07:15:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:28.964 07:15:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:28.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:28.964 07:15:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:28.964 07:15:02 -- common/autotest_common.sh@10 -- # set +x 00:14:28.964 [2024-02-13 07:15:02.459048] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
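Each test stage above launches its own bdev_svc app and then blocks until the app's JSON-RPC socket answers. Condensed into a standalone sketch (binary and socket paths copied from the log; the polling loop is illustrative, since waitforlisten's exact body is not shown here):

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
    -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
# poll the RPC socket until the target is ready to serve requests
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done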
00:14:28.964 [2024-02-13 07:15:02.459268] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.964 [2024-02-13 07:15:02.628348] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.223 [2024-02-13 07:15:02.821912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.482 [2024-02-13 07:15:03.013838] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.741 07:15:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:29.741 07:15:03 -- common/autotest_common.sh@850 -- # return 0 00:14:29.741 07:15:03 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:30.000 [2024-02-13 07:15:03.587034] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:30.000 [2024-02-13 07:15:03.587141] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:30.000 [2024-02-13 07:15:03.587154] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.000 [2024-02-13 07:15:03.587176] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.000 07:15:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.258 07:15:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:30.258 "name": "Existed_Raid", 00:14:30.258 "uuid": "6ab43473-86cc-4f42-96b9-447802139eb5", 00:14:30.258 "strip_size_kb": 64, 00:14:30.258 "state": "configuring", 00:14:30.258 "raid_level": "concat", 00:14:30.258 "superblock": true, 00:14:30.258 "num_base_bdevs": 2, 00:14:30.258 "num_base_bdevs_discovered": 0, 00:14:30.258 "num_base_bdevs_operational": 2, 00:14:30.258 "base_bdevs_list": [ 00:14:30.258 { 00:14:30.258 "name": "BaseBdev1", 00:14:30.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.258 "is_configured": false, 00:14:30.258 "data_offset": 0, 00:14:30.259 "data_size": 0 00:14:30.259 }, 00:14:30.259 { 00:14:30.259 "name": "BaseBdev2", 00:14:30.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.259 "is_configured": false, 00:14:30.259 "data_offset": 0, 00:14:30.259 "data_size": 0 00:14:30.259 } 00:14:30.259 ] 00:14:30.259 }' 00:14:30.259 07:15:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:30.259 07:15:03 -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.827 07:15:04 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:31.087 [2024-02-13 07:15:04.687055] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.087 [2024-02-13 07:15:04.687112] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:31.087 07:15:04 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:31.346 [2024-02-13 07:15:04.931216] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.346 [2024-02-13 07:15:04.931331] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.346 [2024-02-13 07:15:04.931344] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.346 [2024-02-13 07:15:04.931367] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.346 07:15:04 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.605 [2024-02-13 07:15:05.212993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.605 BaseBdev1 00:14:31.605 07:15:05 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:31.605 07:15:05 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:14:31.605 07:15:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:31.605 07:15:05 -- common/autotest_common.sh@887 -- # local i 00:14:31.605 07:15:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:31.605 07:15:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:31.605 07:15:05 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:31.865 07:15:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:32.124 [ 00:14:32.124 { 00:14:32.124 "name": "BaseBdev1", 00:14:32.124 "aliases": [ 00:14:32.124 "c1901023-37a5-4c69-b048-6bc18c731153" 00:14:32.124 ], 00:14:32.124 "product_name": "Malloc disk", 00:14:32.124 "block_size": 512, 00:14:32.124 "num_blocks": 65536, 00:14:32.124 "uuid": "c1901023-37a5-4c69-b048-6bc18c731153", 00:14:32.124 "assigned_rate_limits": { 00:14:32.124 "rw_ios_per_sec": 0, 00:14:32.124 "rw_mbytes_per_sec": 0, 00:14:32.124 "r_mbytes_per_sec": 0, 00:14:32.124 "w_mbytes_per_sec": 0 00:14:32.124 }, 00:14:32.124 "claimed": true, 00:14:32.124 "claim_type": "exclusive_write", 00:14:32.124 "zoned": false, 00:14:32.124 "supported_io_types": { 00:14:32.124 "read": true, 00:14:32.124 "write": true, 00:14:32.124 "unmap": true, 00:14:32.124 "write_zeroes": true, 00:14:32.124 "flush": true, 00:14:32.124 "reset": true, 00:14:32.124 "compare": false, 00:14:32.124 "compare_and_write": false, 00:14:32.124 "abort": true, 00:14:32.125 "nvme_admin": false, 00:14:32.125 "nvme_io": false 00:14:32.125 }, 00:14:32.125 "memory_domains": [ 00:14:32.125 { 00:14:32.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.125 "dma_device_type": 2 00:14:32.125 } 00:14:32.125 ], 00:14:32.125 "driver_specific": {} 00:14:32.125 } 00:14:32.125 ] 00:14:32.125 
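The waitforbdev helper traced above reduces to two RPCs: flush any pending examine callbacks, then look the bdev up by name with a timeout. A minimal equivalent, paraphrased from the xtrace rather than quoted from the helper:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_wait_for_examine
# -t 2000 lets the lookup wait up to 2000 ms for BaseBdev1 to register;
# a non-zero exit means the bdev never appeared
$rpc bdev_get_bdevs -b BaseBdev1 -t 2000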
07:15:05 -- common/autotest_common.sh@893 -- # return 0 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.125 07:15:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.383 07:15:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:32.383 "name": "Existed_Raid", 00:14:32.383 "uuid": "ee63f2f3-fc14-46d2-96ac-d17dcc5ec3ac", 00:14:32.383 "strip_size_kb": 64, 00:14:32.383 "state": "configuring", 00:14:32.384 "raid_level": "concat", 00:14:32.384 "superblock": true, 00:14:32.384 "num_base_bdevs": 2, 00:14:32.384 "num_base_bdevs_discovered": 1, 00:14:32.384 "num_base_bdevs_operational": 2, 00:14:32.384 "base_bdevs_list": [ 00:14:32.384 { 00:14:32.384 "name": "BaseBdev1", 00:14:32.384 "uuid": "c1901023-37a5-4c69-b048-6bc18c731153", 00:14:32.384 "is_configured": true, 00:14:32.384 "data_offset": 2048, 00:14:32.384 "data_size": 63488 00:14:32.384 }, 00:14:32.384 { 00:14:32.384 "name": "BaseBdev2", 00:14:32.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.384 "is_configured": false, 00:14:32.384 "data_offset": 0, 00:14:32.384 "data_size": 0 00:14:32.384 } 00:14:32.384 ] 00:14:32.384 }' 00:14:32.384 07:15:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:32.384 07:15:05 -- common/autotest_common.sh@10 -- # set +x 00:14:32.952 07:15:06 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:33.211 [2024-02-13 07:15:06.757541] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:33.211 [2024-02-13 07:15:06.757650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:33.211 07:15:06 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:14:33.211 07:15:06 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:33.470 07:15:07 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:33.729 BaseBdev1 00:14:33.729 07:15:07 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:14:33.729 07:15:07 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:14:33.729 07:15:07 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:33.729 07:15:07 -- common/autotest_common.sh@887 -- # local i 00:14:33.729 07:15:07 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:33.729 07:15:07 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:33.729 07:15:07 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:33.989 07:15:07 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:34.248 [ 00:14:34.248 { 00:14:34.248 "name": "BaseBdev1", 00:14:34.248 "aliases": [ 00:14:34.248 "fe33338c-51f6-4de1-9ced-868b25f76318" 00:14:34.248 ], 00:14:34.248 "product_name": "Malloc disk", 00:14:34.248 "block_size": 512, 00:14:34.248 "num_blocks": 65536, 00:14:34.248 "uuid": "fe33338c-51f6-4de1-9ced-868b25f76318", 00:14:34.248 "assigned_rate_limits": { 00:14:34.248 "rw_ios_per_sec": 0, 00:14:34.248 "rw_mbytes_per_sec": 0, 00:14:34.248 "r_mbytes_per_sec": 0, 00:14:34.248 "w_mbytes_per_sec": 0 00:14:34.248 }, 00:14:34.248 "claimed": false, 00:14:34.248 "zoned": false, 00:14:34.248 "supported_io_types": { 00:14:34.248 "read": true, 00:14:34.248 "write": true, 00:14:34.248 "unmap": true, 00:14:34.248 "write_zeroes": true, 00:14:34.248 "flush": true, 00:14:34.248 "reset": true, 00:14:34.248 "compare": false, 00:14:34.248 "compare_and_write": false, 00:14:34.248 "abort": true, 00:14:34.248 "nvme_admin": false, 00:14:34.248 "nvme_io": false 00:14:34.248 }, 00:14:34.248 "memory_domains": [ 00:14:34.248 { 00:14:34.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.248 "dma_device_type": 2 00:14:34.248 } 00:14:34.248 ], 00:14:34.248 "driver_specific": {} 00:14:34.248 } 00:14:34.248 ] 00:14:34.248 07:15:07 -- common/autotest_common.sh@893 -- # return 0 00:14:34.248 07:15:07 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:34.507 [2024-02-13 07:15:07.997867] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:34.507 [2024-02-13 07:15:07.999973] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.507 [2024-02-13 07:15:08.000051] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.507 07:15:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.766 07:15:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.766 "name": "Existed_Raid", 00:14:34.766 "uuid": "407dabe1-7ec4-41e0-b762-be00e3be6235", 00:14:34.766 "strip_size_kb": 64, 00:14:34.766 "state": 
"configuring", 00:14:34.766 "raid_level": "concat", 00:14:34.766 "superblock": true, 00:14:34.766 "num_base_bdevs": 2, 00:14:34.766 "num_base_bdevs_discovered": 1, 00:14:34.766 "num_base_bdevs_operational": 2, 00:14:34.766 "base_bdevs_list": [ 00:14:34.766 { 00:14:34.766 "name": "BaseBdev1", 00:14:34.766 "uuid": "fe33338c-51f6-4de1-9ced-868b25f76318", 00:14:34.766 "is_configured": true, 00:14:34.766 "data_offset": 2048, 00:14:34.766 "data_size": 63488 00:14:34.766 }, 00:14:34.766 { 00:14:34.766 "name": "BaseBdev2", 00:14:34.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.766 "is_configured": false, 00:14:34.766 "data_offset": 0, 00:14:34.766 "data_size": 0 00:14:34.766 } 00:14:34.766 ] 00:14:34.766 }' 00:14:34.766 07:15:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.767 07:15:08 -- common/autotest_common.sh@10 -- # set +x 00:14:35.334 07:15:09 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:35.903 [2024-02-13 07:15:09.318981] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.903 [2024-02-13 07:15:09.319263] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:14:35.903 [2024-02-13 07:15:09.319279] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:35.903 [2024-02-13 07:15:09.319433] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:14:35.903 BaseBdev2 00:14:35.903 [2024-02-13 07:15:09.319810] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:14:35.903 [2024-02-13 07:15:09.319836] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:14:35.903 [2024-02-13 07:15:09.319993] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.903 07:15:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:35.903 07:15:09 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:14:35.903 07:15:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:35.903 07:15:09 -- common/autotest_common.sh@887 -- # local i 00:14:35.903 07:15:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:35.903 07:15:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:35.903 07:15:09 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:35.903 07:15:09 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:36.163 [ 00:14:36.163 { 00:14:36.163 "name": "BaseBdev2", 00:14:36.163 "aliases": [ 00:14:36.163 "d9728e04-9685-4c56-9f8f-254b872cb0ab" 00:14:36.163 ], 00:14:36.163 "product_name": "Malloc disk", 00:14:36.163 "block_size": 512, 00:14:36.163 "num_blocks": 65536, 00:14:36.163 "uuid": "d9728e04-9685-4c56-9f8f-254b872cb0ab", 00:14:36.163 "assigned_rate_limits": { 00:14:36.163 "rw_ios_per_sec": 0, 00:14:36.163 "rw_mbytes_per_sec": 0, 00:14:36.163 "r_mbytes_per_sec": 0, 00:14:36.163 "w_mbytes_per_sec": 0 00:14:36.163 }, 00:14:36.163 "claimed": true, 00:14:36.163 "claim_type": "exclusive_write", 00:14:36.163 "zoned": false, 00:14:36.163 "supported_io_types": { 00:14:36.163 "read": true, 00:14:36.163 "write": true, 00:14:36.163 "unmap": true, 00:14:36.163 "write_zeroes": true, 00:14:36.163 "flush": true, 00:14:36.163 
"reset": true, 00:14:36.163 "compare": false, 00:14:36.163 "compare_and_write": false, 00:14:36.163 "abort": true, 00:14:36.163 "nvme_admin": false, 00:14:36.163 "nvme_io": false 00:14:36.163 }, 00:14:36.163 "memory_domains": [ 00:14:36.163 { 00:14:36.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.163 "dma_device_type": 2 00:14:36.163 } 00:14:36.163 ], 00:14:36.163 "driver_specific": {} 00:14:36.163 } 00:14:36.163 ] 00:14:36.163 07:15:09 -- common/autotest_common.sh@893 -- # return 0 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.163 07:15:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.422 07:15:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:36.422 "name": "Existed_Raid", 00:14:36.422 "uuid": "407dabe1-7ec4-41e0-b762-be00e3be6235", 00:14:36.422 "strip_size_kb": 64, 00:14:36.422 "state": "online", 00:14:36.422 "raid_level": "concat", 00:14:36.422 "superblock": true, 00:14:36.422 "num_base_bdevs": 2, 00:14:36.422 "num_base_bdevs_discovered": 2, 00:14:36.422 "num_base_bdevs_operational": 2, 00:14:36.422 "base_bdevs_list": [ 00:14:36.422 { 00:14:36.422 "name": "BaseBdev1", 00:14:36.422 "uuid": "fe33338c-51f6-4de1-9ced-868b25f76318", 00:14:36.422 "is_configured": true, 00:14:36.422 "data_offset": 2048, 00:14:36.422 "data_size": 63488 00:14:36.422 }, 00:14:36.422 { 00:14:36.422 "name": "BaseBdev2", 00:14:36.422 "uuid": "d9728e04-9685-4c56-9f8f-254b872cb0ab", 00:14:36.422 "is_configured": true, 00:14:36.422 "data_offset": 2048, 00:14:36.422 "data_size": 63488 00:14:36.422 } 00:14:36.422 ] 00:14:36.422 }' 00:14:36.422 07:15:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:36.422 07:15:10 -- common/autotest_common.sh@10 -- # set +x 00:14:37.357 07:15:10 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:37.357 [2024-02-13 07:15:10.983548] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.357 [2024-02-13 07:15:10.983592] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.357 [2024-02-13 07:15:10.983685] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:37.616 
07:15:11 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.616 07:15:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.874 07:15:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.874 "name": "Existed_Raid", 00:14:37.874 "uuid": "407dabe1-7ec4-41e0-b762-be00e3be6235", 00:14:37.874 "strip_size_kb": 64, 00:14:37.874 "state": "offline", 00:14:37.874 "raid_level": "concat", 00:14:37.874 "superblock": true, 00:14:37.874 "num_base_bdevs": 2, 00:14:37.874 "num_base_bdevs_discovered": 1, 00:14:37.874 "num_base_bdevs_operational": 1, 00:14:37.874 "base_bdevs_list": [ 00:14:37.874 { 00:14:37.874 "name": null, 00:14:37.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.874 "is_configured": false, 00:14:37.874 "data_offset": 2048, 00:14:37.874 "data_size": 63488 00:14:37.874 }, 00:14:37.874 { 00:14:37.874 "name": "BaseBdev2", 00:14:37.874 "uuid": "d9728e04-9685-4c56-9f8f-254b872cb0ab", 00:14:37.874 "is_configured": true, 00:14:37.874 "data_offset": 2048, 00:14:37.874 "data_size": 63488 00:14:37.874 } 00:14:37.874 ] 00:14:37.874 }' 00:14:37.874 07:15:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.874 07:15:11 -- common/autotest_common.sh@10 -- # set +x 00:14:38.442 07:15:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:38.442 07:15:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:38.442 07:15:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.442 07:15:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:38.701 07:15:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:38.701 07:15:12 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:38.701 07:15:12 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:38.701 [2024-02-13 07:15:12.363241] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:38.701 [2024-02-13 07:15:12.363354] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:14:38.960 07:15:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:38.960 07:15:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:38.960 07:15:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.960 07:15:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:39.219 07:15:12 -- bdev/bdev_raid.sh@281 -- # 
raid_bdev= 00:14:39.219 07:15:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:39.219 07:15:12 -- bdev/bdev_raid.sh@287 -- # killprocess 117516 00:14:39.219 07:15:12 -- common/autotest_common.sh@924 -- # '[' -z 117516 ']' 00:14:39.219 07:15:12 -- common/autotest_common.sh@928 -- # kill -0 117516 00:14:39.219 07:15:12 -- common/autotest_common.sh@929 -- # uname 00:14:39.219 07:15:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:39.219 07:15:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 117516 00:14:39.219 killing process with pid 117516 00:14:39.219 07:15:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:39.219 07:15:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:39.219 07:15:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 117516' 00:14:39.219 07:15:12 -- common/autotest_common.sh@943 -- # kill 117516 00:14:39.219 07:15:12 -- common/autotest_common.sh@948 -- # wait 117516 00:14:39.219 [2024-02-13 07:15:12.709174] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.219 [2024-02-13 07:15:12.709682] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:40.162 ************************************ 00:14:40.162 END TEST raid_state_function_test_sb 00:14:40.162 ************************************ 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:14:40.162 00:14:40.162 real 0m11.397s 00:14:40.162 user 0m19.843s 00:14:40.162 sys 0m1.470s 00:14:40.162 07:15:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:40.162 07:15:13 -- common/autotest_common.sh@10 -- # set +x 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:14:40.162 07:15:13 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:14:40.162 07:15:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:40.162 07:15:13 -- common/autotest_common.sh@10 -- # set +x 00:14:40.162 ************************************ 00:14:40.162 START TEST raid_superblock_test 00:14:40.162 ************************************ 00:14:40.162 07:15:13 -- common/autotest_common.sh@1102 -- # raid_superblock_test concat 2 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:14:40.162 07:15:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=117870 00:14:40.163 07:15:13 -- bdev/bdev_raid.sh@356 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:14:40.163 07:15:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 117870 /var/tmp/spdk-raid.sock 00:14:40.163 07:15:13 -- common/autotest_common.sh@817 -- # '[' -z 117870 ']' 00:14:40.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:40.163 07:15:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:40.163 07:15:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:40.163 07:15:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:40.163 07:15:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:40.163 07:15:13 -- common/autotest_common.sh@10 -- # set +x 00:14:40.422 [2024-02-13 07:15:13.905725] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:14:40.422 [2024-02-13 07:15:13.905882] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117870 ] 00:14:40.422 [2024-02-13 07:15:14.070900] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.681 [2024-02-13 07:15:14.315143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.940 [2024-02-13 07:15:14.500821] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:41.199 07:15:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:41.200 07:15:14 -- common/autotest_common.sh@850 -- # return 0 00:14:41.200 07:15:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:14:41.200 07:15:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:41.200 07:15:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:14:41.200 07:15:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:14:41.200 07:15:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:41.200 07:15:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:41.200 07:15:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:41.200 07:15:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:41.200 07:15:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:41.459 malloc1 00:14:41.459 07:15:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.718 [2024-02-13 07:15:15.365799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.718 [2024-02-13 07:15:15.365917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.718 [2024-02-13 07:15:15.365954] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:14:41.718 [2024-02-13 07:15:15.366005] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.718 [2024-02-13 07:15:15.368409] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.718 [2024-02-13 07:15:15.368466] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.718 pt1 00:14:41.718 07:15:15 -- bdev/bdev_raid.sh@361 
-- # (( i++ )) 00:14:41.718 07:15:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:41.718 07:15:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:14:41.718 07:15:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:14:41.718 07:15:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:41.718 07:15:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:41.718 07:15:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:14:41.718 07:15:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:41.718 07:15:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:41.977 malloc2 00:14:41.977 07:15:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.236 [2024-02-13 07:15:15.832499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.236 [2024-02-13 07:15:15.832588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.236 [2024-02-13 07:15:15.832631] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:14:42.236 [2024-02-13 07:15:15.832687] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.236 [2024-02-13 07:15:15.835125] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.236 [2024-02-13 07:15:15.835184] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.236 pt2 00:14:42.237 07:15:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:14:42.237 07:15:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:14:42.237 07:15:15 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:14:42.495 [2024-02-13 07:15:16.092680] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:42.495 [2024-02-13 07:15:16.094479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.495 [2024-02-13 07:15:16.094681] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:14:42.496 [2024-02-13 07:15:16.094695] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:42.496 [2024-02-13 07:15:16.094858] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:14:42.496 [2024-02-13 07:15:16.095195] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:14:42.496 [2024-02-13 07:15:16.095233] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:14:42.496 [2024-02-13 07:15:16.095392] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@121 -- # local 
num_base_bdevs_operational=2 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.496 07:15:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.755 07:15:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:42.755 "name": "raid_bdev1", 00:14:42.755 "uuid": "5752a8a5-678c-4be0-93e3-8b092bd7dbb0", 00:14:42.755 "strip_size_kb": 64, 00:14:42.755 "state": "online", 00:14:42.755 "raid_level": "concat", 00:14:42.755 "superblock": true, 00:14:42.755 "num_base_bdevs": 2, 00:14:42.755 "num_base_bdevs_discovered": 2, 00:14:42.755 "num_base_bdevs_operational": 2, 00:14:42.755 "base_bdevs_list": [ 00:14:42.755 { 00:14:42.755 "name": "pt1", 00:14:42.755 "uuid": "d3d6e714-3afd-5de4-b6de-ac9b6b575cdb", 00:14:42.755 "is_configured": true, 00:14:42.755 "data_offset": 2048, 00:14:42.755 "data_size": 63488 00:14:42.755 }, 00:14:42.755 { 00:14:42.755 "name": "pt2", 00:14:42.755 "uuid": "0a1e3ec2-f121-58d1-b597-4c794e181bc3", 00:14:42.755 "is_configured": true, 00:14:42.755 "data_offset": 2048, 00:14:42.755 "data_size": 63488 00:14:42.755 } 00:14:42.755 ] 00:14:42.755 }' 00:14:42.755 07:15:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:42.755 07:15:16 -- common/autotest_common.sh@10 -- # set +x 00:14:43.322 07:15:16 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:43.322 07:15:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:14:43.582 [2024-02-13 07:15:17.197087] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.582 07:15:17 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=5752a8a5-678c-4be0-93e3-8b092bd7dbb0 00:14:43.582 07:15:17 -- bdev/bdev_raid.sh@380 -- # '[' -z 5752a8a5-678c-4be0-93e3-8b092bd7dbb0 ']' 00:14:43.582 07:15:17 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:43.841 [2024-02-13 07:15:17.456861] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.841 [2024-02-13 07:15:17.456889] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.841 [2024-02-13 07:15:17.456986] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.841 [2024-02-13 07:15:17.457044] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.841 [2024-02-13 07:15:17.457054] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:14:43.841 07:15:17 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.841 07:15:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:14:44.101 07:15:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:14:44.101 07:15:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:14:44.101 07:15:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:44.101 07:15:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt1 00:14:44.359 07:15:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:14:44.359 07:15:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:14:44.619 07:15:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:14:44.619 07:15:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:44.877 07:15:18 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:14:44.877 07:15:18 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:44.877 07:15:18 -- common/autotest_common.sh@638 -- # local es=0 00:14:44.877 07:15:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:44.877 07:15:18 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.877 07:15:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:44.877 07:15:18 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.877 07:15:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:44.877 07:15:18 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.877 07:15:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:44.877 07:15:18 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:44.877 07:15:18 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:44.877 07:15:18 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:14:45.135 [2024-02-13 07:15:18.669192] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:45.135 [2024-02-13 07:15:18.671502] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:45.135 [2024-02-13 07:15:18.671654] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:14:45.135 [2024-02-13 07:15:18.671746] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:14:45.135 [2024-02-13 07:15:18.671784] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:45.135 [2024-02-13 07:15:18.671796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:14:45.135 request: 00:14:45.135 { 00:14:45.135 "name": "raid_bdev1", 00:14:45.135 "raid_level": "concat", 00:14:45.135 "base_bdevs": [ 00:14:45.135 "malloc1", 00:14:45.135 "malloc2" 00:14:45.135 ], 00:14:45.135 "superblock": false, 00:14:45.135 "strip_size_kb": 64, 00:14:45.135 "method": "bdev_raid_create", 00:14:45.135 "req_id": 1 00:14:45.135 } 00:14:45.135 Got JSON-RPC error response 00:14:45.135 response: 00:14:45.135 { 00:14:45.135 "code": -17, 00:14:45.135 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:45.135 } 00:14:45.135 07:15:18 -- common/autotest_common.sh@641 -- # es=1 00:14:45.135 07:15:18 -- 
common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:45.135 07:15:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:45.135 07:15:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:45.135 07:15:18 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.135 07:15:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:14:45.393 07:15:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:14:45.393 07:15:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:14:45.393 07:15:18 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:45.652 [2024-02-13 07:15:19.185256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:45.652 [2024-02-13 07:15:19.185379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.652 [2024-02-13 07:15:19.185423] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:45.652 [2024-02-13 07:15:19.185451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.652 [2024-02-13 07:15:19.188117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.652 [2024-02-13 07:15:19.188176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:45.652 [2024-02-13 07:15:19.188279] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:14:45.652 [2024-02-13 07:15:19.188340] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:45.652 pt1 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.652 07:15:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.911 07:15:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:45.911 "name": "raid_bdev1", 00:14:45.911 "uuid": "5752a8a5-678c-4be0-93e3-8b092bd7dbb0", 00:14:45.911 "strip_size_kb": 64, 00:14:45.911 "state": "configuring", 00:14:45.911 "raid_level": "concat", 00:14:45.911 "superblock": true, 00:14:45.911 "num_base_bdevs": 2, 00:14:45.911 "num_base_bdevs_discovered": 1, 00:14:45.911 "num_base_bdevs_operational": 2, 00:14:45.911 "base_bdevs_list": [ 00:14:45.911 { 00:14:45.911 "name": "pt1", 00:14:45.911 "uuid": "d3d6e714-3afd-5de4-b6de-ac9b6b575cdb", 00:14:45.911 "is_configured": true, 00:14:45.911 "data_offset": 2048, 00:14:45.911 "data_size": 63488 00:14:45.911 }, 00:14:45.911 { 00:14:45.911 "name": null, 00:14:45.911 "uuid": 
"0a1e3ec2-f121-58d1-b597-4c794e181bc3", 00:14:45.911 "is_configured": false, 00:14:45.911 "data_offset": 2048, 00:14:45.911 "data_size": 63488 00:14:45.911 } 00:14:45.911 ] 00:14:45.911 }' 00:14:45.911 07:15:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:45.911 07:15:19 -- common/autotest_common.sh@10 -- # set +x 00:14:46.846 07:15:20 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:14:46.846 07:15:20 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:14:46.846 07:15:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:46.846 07:15:20 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:46.846 [2024-02-13 07:15:20.421800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:46.846 [2024-02-13 07:15:20.421959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:46.846 [2024-02-13 07:15:20.422004] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:46.846 [2024-02-13 07:15:20.422031] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:46.846 [2024-02-13 07:15:20.422590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:46.846 [2024-02-13 07:15:20.422639] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:46.846 [2024-02-13 07:15:20.422750] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:14:46.846 [2024-02-13 07:15:20.422779] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:46.846 [2024-02-13 07:15:20.422910] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:14:46.846 [2024-02-13 07:15:20.422922] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:46.846 [2024-02-13 07:15:20.423082] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:14:46.846 [2024-02-13 07:15:20.423441] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:14:46.846 [2024-02-13 07:15:20.423455] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:14:46.846 [2024-02-13 07:15:20.423602] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.846 pt2 00:14:46.846 07:15:20 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:14:46.846 07:15:20 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.847 07:15:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.105 07:15:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:47.105 "name": "raid_bdev1", 00:14:47.105 "uuid": "5752a8a5-678c-4be0-93e3-8b092bd7dbb0", 00:14:47.105 "strip_size_kb": 64, 00:14:47.105 "state": "online", 00:14:47.105 "raid_level": "concat", 00:14:47.105 "superblock": true, 00:14:47.105 "num_base_bdevs": 2, 00:14:47.105 "num_base_bdevs_discovered": 2, 00:14:47.105 "num_base_bdevs_operational": 2, 00:14:47.105 "base_bdevs_list": [ 00:14:47.105 { 00:14:47.105 "name": "pt1", 00:14:47.105 "uuid": "d3d6e714-3afd-5de4-b6de-ac9b6b575cdb", 00:14:47.105 "is_configured": true, 00:14:47.105 "data_offset": 2048, 00:14:47.105 "data_size": 63488 00:14:47.105 }, 00:14:47.105 { 00:14:47.105 "name": "pt2", 00:14:47.105 "uuid": "0a1e3ec2-f121-58d1-b597-4c794e181bc3", 00:14:47.105 "is_configured": true, 00:14:47.105 "data_offset": 2048, 00:14:47.105 "data_size": 63488 00:14:47.105 } 00:14:47.105 ] 00:14:47.105 }' 00:14:47.105 07:15:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:47.105 07:15:20 -- common/autotest_common.sh@10 -- # set +x 00:14:48.041 07:15:21 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:48.041 07:15:21 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:14:48.041 [2024-02-13 07:15:21.658492] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:48.041 07:15:21 -- bdev/bdev_raid.sh@430 -- # '[' 5752a8a5-678c-4be0-93e3-8b092bd7dbb0 '!=' 5752a8a5-678c-4be0-93e3-8b092bd7dbb0 ']' 00:14:48.041 07:15:21 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:14:48.041 07:15:21 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:48.041 07:15:21 -- bdev/bdev_raid.sh@197 -- # return 1 00:14:48.041 07:15:21 -- bdev/bdev_raid.sh@511 -- # killprocess 117870 00:14:48.041 07:15:21 -- common/autotest_common.sh@924 -- # '[' -z 117870 ']' 00:14:48.041 07:15:21 -- common/autotest_common.sh@928 -- # kill -0 117870 00:14:48.041 07:15:21 -- common/autotest_common.sh@929 -- # uname 00:14:48.041 07:15:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:48.041 07:15:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 117870 00:14:48.041 killing process with pid 117870 00:14:48.041 07:15:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:48.041 07:15:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:48.041 07:15:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 117870' 00:14:48.041 07:15:21 -- common/autotest_common.sh@943 -- # kill 117870 00:14:48.041 07:15:21 -- common/autotest_common.sh@948 -- # wait 117870 00:14:48.041 [2024-02-13 07:15:21.699063] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.041 [2024-02-13 07:15:21.699193] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.041 [2024-02-13 07:15:21.699259] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.041 [2024-02-13 07:15:21.699276] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:14:48.299 [2024-02-13 07:15:21.860656] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:49.677 ************************************ 00:14:49.677 END TEST raid_superblock_test 00:14:49.677 
************************************ 00:14:49.677 07:15:22 -- bdev/bdev_raid.sh@513 -- # return 0 00:14:49.677 00:14:49.677 real 0m9.126s 00:14:49.677 user 0m15.737s 00:14:49.677 sys 0m1.053s 00:14:49.677 07:15:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:49.677 07:15:22 -- common/autotest_common.sh@10 -- # set +x 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:14:49.677 07:15:23 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:14:49.677 07:15:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:14:49.677 07:15:23 -- common/autotest_common.sh@10 -- # set +x 00:14:49.677 ************************************ 00:14:49.677 START TEST raid_state_function_test 00:14:49.677 ************************************ 00:14:49.677 07:15:23 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 2 false 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=118143 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118143' 00:14:49.677 Process raid pid: 118143 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118143 /var/tmp/spdk-raid.sock 00:14:49.677 07:15:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:49.677 07:15:23 -- common/autotest_common.sh@817 -- # '[' -z 118143 ']' 00:14:49.677 07:15:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:49.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
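[Editor's note] The startup pattern traced just above — launch a bare bdev_svc app on a private RPC socket, then block in waitforlisten until it answers — is the scaffolding every raid_* test in this log reuses. A minimal sketch of that pattern, with paths taken from the log; the retry loop and the rpc_get_methods probe are illustrative simplifications, not the exact body of waitforlisten:

# Launch the bare bdev service; -L bdev_raid enables the *DEBUG* lines seen throughout this log
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!

# Poll until the app serves RPCs on the UNIX socket (simplified stand-in for waitforlisten)
for ((retry = 0; retry < 100; retry++)); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done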
00:14:49.677 07:15:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:49.677 07:15:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:49.677 07:15:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:49.677 07:15:23 -- common/autotest_common.sh@10 -- # set +x 00:14:49.677 [2024-02-13 07:15:23.116402] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:14:49.677 [2024-02-13 07:15:23.117397] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.677 [2024-02-13 07:15:23.291218] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.936 [2024-02-13 07:15:23.511202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.196 [2024-02-13 07:15:23.706794] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.455 07:15:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:50.455 07:15:24 -- common/autotest_common.sh@850 -- # return 0 00:14:50.455 07:15:24 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:50.714 [2024-02-13 07:15:24.292719] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.714 [2024-02-13 07:15:24.292986] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.714 [2024-02-13 07:15:24.293125] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:50.714 [2024-02-13 07:15:24.293278] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:50.714 07:15:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.976 07:15:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:50.976 "name": "Existed_Raid", 00:14:50.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.976 "strip_size_kb": 0, 00:14:50.976 "state": "configuring", 00:14:50.976 "raid_level": "raid1", 00:14:50.976 "superblock": false, 00:14:50.976 "num_base_bdevs": 2, 00:14:50.976 "num_base_bdevs_discovered": 0, 00:14:50.976 "num_base_bdevs_operational": 2, 00:14:50.976 "base_bdevs_list": [ 00:14:50.976 { 00:14:50.976 "name": "BaseBdev1", 00:14:50.976 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:50.976 "is_configured": false, 00:14:50.976 "data_offset": 0, 00:14:50.976 "data_size": 0 00:14:50.976 }, 00:14:50.976 { 00:14:50.976 "name": "BaseBdev2", 00:14:50.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.976 "is_configured": false, 00:14:50.976 "data_offset": 0, 00:14:50.976 "data_size": 0 00:14:50.976 } 00:14:50.976 ] 00:14:50.976 }' 00:14:50.976 07:15:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:50.976 07:15:24 -- common/autotest_common.sh@10 -- # set +x 00:14:51.945 07:15:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:51.945 [2024-02-13 07:15:25.540904] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:51.945 [2024-02-13 07:15:25.541202] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:14:51.945 07:15:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:52.216 [2024-02-13 07:15:25.753015] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:52.216 [2024-02-13 07:15:25.753383] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:52.216 [2024-02-13 07:15:25.753497] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:52.216 [2024-02-13 07:15:25.753562] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:52.216 07:15:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.474 [2024-02-13 07:15:26.038102] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.474 BaseBdev1 00:14:52.474 07:15:26 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:14:52.474 07:15:26 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:14:52.474 07:15:26 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:52.474 07:15:26 -- common/autotest_common.sh@887 -- # local i 00:14:52.474 07:15:26 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:52.474 07:15:26 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:52.474 07:15:26 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.733 07:15:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.992 [ 00:14:52.992 { 00:14:52.992 "name": "BaseBdev1", 00:14:52.992 "aliases": [ 00:14:52.992 "a834e656-7045-4ec0-b2e3-1a39beadd4aa" 00:14:52.992 ], 00:14:52.992 "product_name": "Malloc disk", 00:14:52.992 "block_size": 512, 00:14:52.992 "num_blocks": 65536, 00:14:52.992 "uuid": "a834e656-7045-4ec0-b2e3-1a39beadd4aa", 00:14:52.992 "assigned_rate_limits": { 00:14:52.992 "rw_ios_per_sec": 0, 00:14:52.992 "rw_mbytes_per_sec": 0, 00:14:52.992 "r_mbytes_per_sec": 0, 00:14:52.992 "w_mbytes_per_sec": 0 00:14:52.992 }, 00:14:52.992 "claimed": true, 00:14:52.992 "claim_type": "exclusive_write", 00:14:52.992 "zoned": false, 00:14:52.992 "supported_io_types": { 00:14:52.992 "read": true, 00:14:52.992 "write": true, 00:14:52.992 "unmap": true, 00:14:52.992 "write_zeroes": true, 
00:14:52.992 "flush": true, 00:14:52.992 "reset": true, 00:14:52.992 "compare": false, 00:14:52.992 "compare_and_write": false, 00:14:52.992 "abort": true, 00:14:52.992 "nvme_admin": false, 00:14:52.992 "nvme_io": false 00:14:52.992 }, 00:14:52.992 "memory_domains": [ 00:14:52.992 { 00:14:52.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.992 "dma_device_type": 2 00:14:52.992 } 00:14:52.992 ], 00:14:52.992 "driver_specific": {} 00:14:52.992 } 00:14:52.992 ] 00:14:52.992 07:15:26 -- common/autotest_common.sh@893 -- # return 0 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.992 07:15:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.251 07:15:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:53.251 "name": "Existed_Raid", 00:14:53.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.251 "strip_size_kb": 0, 00:14:53.251 "state": "configuring", 00:14:53.251 "raid_level": "raid1", 00:14:53.251 "superblock": false, 00:14:53.251 "num_base_bdevs": 2, 00:14:53.251 "num_base_bdevs_discovered": 1, 00:14:53.251 "num_base_bdevs_operational": 2, 00:14:53.251 "base_bdevs_list": [ 00:14:53.251 { 00:14:53.251 "name": "BaseBdev1", 00:14:53.251 "uuid": "a834e656-7045-4ec0-b2e3-1a39beadd4aa", 00:14:53.251 "is_configured": true, 00:14:53.251 "data_offset": 0, 00:14:53.251 "data_size": 65536 00:14:53.251 }, 00:14:53.251 { 00:14:53.251 "name": "BaseBdev2", 00:14:53.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.251 "is_configured": false, 00:14:53.251 "data_offset": 0, 00:14:53.251 "data_size": 0 00:14:53.251 } 00:14:53.251 ] 00:14:53.251 }' 00:14:53.251 07:15:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:53.251 07:15:26 -- common/autotest_common.sh@10 -- # set +x 00:14:53.818 07:15:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:54.076 [2024-02-13 07:15:27.682760] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.076 [2024-02-13 07:15:27.682976] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:14:54.076 07:15:27 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:14:54.076 07:15:27 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:14:54.335 [2024-02-13 07:15:27.910886] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.335 [2024-02-13 07:15:27.913406] bdev.c:8014:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.335 [2024-02-13 07:15:27.913606] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.335 07:15:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.594 07:15:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.594 "name": "Existed_Raid", 00:14:54.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.594 "strip_size_kb": 0, 00:14:54.594 "state": "configuring", 00:14:54.594 "raid_level": "raid1", 00:14:54.594 "superblock": false, 00:14:54.594 "num_base_bdevs": 2, 00:14:54.594 "num_base_bdevs_discovered": 1, 00:14:54.594 "num_base_bdevs_operational": 2, 00:14:54.594 "base_bdevs_list": [ 00:14:54.594 { 00:14:54.594 "name": "BaseBdev1", 00:14:54.594 "uuid": "a834e656-7045-4ec0-b2e3-1a39beadd4aa", 00:14:54.594 "is_configured": true, 00:14:54.594 "data_offset": 0, 00:14:54.594 "data_size": 65536 00:14:54.594 }, 00:14:54.594 { 00:14:54.594 "name": "BaseBdev2", 00:14:54.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.594 "is_configured": false, 00:14:54.594 "data_offset": 0, 00:14:54.594 "data_size": 0 00:14:54.594 } 00:14:54.594 ] 00:14:54.594 }' 00:14:54.594 07:15:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.594 07:15:28 -- common/autotest_common.sh@10 -- # set +x 00:14:55.530 07:15:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:55.789 [2024-02-13 07:15:29.280650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:55.789 [2024-02-13 07:15:29.280738] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:14:55.789 [2024-02-13 07:15:29.280750] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:55.789 [2024-02-13 07:15:29.280886] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005520 00:14:55.789 [2024-02-13 07:15:29.281337] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:14:55.789 [2024-02-13 07:15:29.281365] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:14:55.789 [2024-02-13 07:15:29.281700] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.789 BaseBdev2 
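[Editor's note] The transition just logged — BaseBdev2 appears, bdev_raid claims it, and Existed_Raid is "created" — condenses to three RPCs. A sketch of that sequence using the exact commands from the trace above; the $rpc shorthand is introduced here for brevity:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Register the array first; with no base bdevs present it sits in state "configuring"
$rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

# 32 MiB malloc bdevs with 512-byte blocks (the "32 512" arguments above);
# each is claimed on arrival, and the RAID goes "online" once both exist
$rpc bdev_malloc_create 32 512 -b BaseBdev1
$rpc bdev_malloc_create 32 512 -b BaseBdev2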
00:14:55.789 07:15:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:14:55.789 07:15:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:14:55.789 07:15:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:55.789 07:15:29 -- common/autotest_common.sh@887 -- # local i 00:14:55.789 07:15:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:55.789 07:15:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:55.789 07:15:29 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:56.048 07:15:29 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:56.307 [ 00:14:56.307 { 00:14:56.307 "name": "BaseBdev2", 00:14:56.307 "aliases": [ 00:14:56.307 "a7e1451c-72af-48a0-8554-5a0511265bef" 00:14:56.307 ], 00:14:56.307 "product_name": "Malloc disk", 00:14:56.307 "block_size": 512, 00:14:56.307 "num_blocks": 65536, 00:14:56.307 "uuid": "a7e1451c-72af-48a0-8554-5a0511265bef", 00:14:56.307 "assigned_rate_limits": { 00:14:56.307 "rw_ios_per_sec": 0, 00:14:56.307 "rw_mbytes_per_sec": 0, 00:14:56.307 "r_mbytes_per_sec": 0, 00:14:56.307 "w_mbytes_per_sec": 0 00:14:56.307 }, 00:14:56.307 "claimed": true, 00:14:56.307 "claim_type": "exclusive_write", 00:14:56.307 "zoned": false, 00:14:56.307 "supported_io_types": { 00:14:56.307 "read": true, 00:14:56.307 "write": true, 00:14:56.307 "unmap": true, 00:14:56.307 "write_zeroes": true, 00:14:56.307 "flush": true, 00:14:56.307 "reset": true, 00:14:56.307 "compare": false, 00:14:56.307 "compare_and_write": false, 00:14:56.307 "abort": true, 00:14:56.307 "nvme_admin": false, 00:14:56.307 "nvme_io": false 00:14:56.307 }, 00:14:56.307 "memory_domains": [ 00:14:56.307 { 00:14:56.307 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.307 "dma_device_type": 2 00:14:56.307 } 00:14:56.307 ], 00:14:56.307 "driver_specific": {} 00:14:56.307 } 00:14:56.307 ] 00:14:56.307 07:15:29 -- common/autotest_common.sh@893 -- # return 0 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.307 07:15:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.566 07:15:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:56.566 "name": "Existed_Raid", 00:14:56.566 "uuid": "237864e3-6981-465d-ad61-8f1cb38b0ea7", 00:14:56.566 "strip_size_kb": 0, 00:14:56.566 "state": "online", 00:14:56.566 "raid_level": "raid1", 
00:14:56.566 "superblock": false, 00:14:56.566 "num_base_bdevs": 2, 00:14:56.566 "num_base_bdevs_discovered": 2, 00:14:56.566 "num_base_bdevs_operational": 2, 00:14:56.566 "base_bdevs_list": [ 00:14:56.566 { 00:14:56.566 "name": "BaseBdev1", 00:14:56.566 "uuid": "a834e656-7045-4ec0-b2e3-1a39beadd4aa", 00:14:56.566 "is_configured": true, 00:14:56.566 "data_offset": 0, 00:14:56.566 "data_size": 65536 00:14:56.566 }, 00:14:56.566 { 00:14:56.566 "name": "BaseBdev2", 00:14:56.566 "uuid": "a7e1451c-72af-48a0-8554-5a0511265bef", 00:14:56.566 "is_configured": true, 00:14:56.566 "data_offset": 0, 00:14:56.566 "data_size": 65536 00:14:56.566 } 00:14:56.566 ] 00:14:56.566 }' 00:14:56.566 07:15:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:56.566 07:15:30 -- common/autotest_common.sh@10 -- # set +x 00:14:57.134 07:15:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:57.394 [2024-02-13 07:15:30.909090] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@196 -- # return 0 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.394 07:15:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.653 07:15:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:57.653 "name": "Existed_Raid", 00:14:57.653 "uuid": "237864e3-6981-465d-ad61-8f1cb38b0ea7", 00:14:57.653 "strip_size_kb": 0, 00:14:57.653 "state": "online", 00:14:57.653 "raid_level": "raid1", 00:14:57.653 "superblock": false, 00:14:57.653 "num_base_bdevs": 2, 00:14:57.653 "num_base_bdevs_discovered": 1, 00:14:57.653 "num_base_bdevs_operational": 1, 00:14:57.653 "base_bdevs_list": [ 00:14:57.653 { 00:14:57.653 "name": null, 00:14:57.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.653 "is_configured": false, 00:14:57.653 "data_offset": 0, 00:14:57.653 "data_size": 65536 00:14:57.653 }, 00:14:57.653 { 00:14:57.653 "name": "BaseBdev2", 00:14:57.653 "uuid": "a7e1451c-72af-48a0-8554-5a0511265bef", 00:14:57.653 "is_configured": true, 00:14:57.653 "data_offset": 0, 00:14:57.653 "data_size": 65536 00:14:57.653 } 00:14:57.653 ] 00:14:57.653 }' 00:14:57.653 07:15:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:57.653 07:15:31 -- common/autotest_common.sh@10 -- # set +x 00:14:58.221 07:15:31 -- 
bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:14:58.221 07:15:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:58.221 07:15:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.221 07:15:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:14:58.789 07:15:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:14:58.789 07:15:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:58.789 07:15:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:58.789 [2024-02-13 07:15:32.400212] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:58.789 [2024-02-13 07:15:32.400265] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.789 [2024-02-13 07:15:32.400341] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.789 [2024-02-13 07:15:32.465789] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.789 [2024-02-13 07:15:32.465830] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:14:58.789 07:15:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:14:58.789 07:15:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:14:59.048 07:15:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.049 07:15:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:14:59.049 07:15:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:14:59.049 07:15:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:14:59.049 07:15:32 -- bdev/bdev_raid.sh@287 -- # killprocess 118143 00:14:59.049 07:15:32 -- common/autotest_common.sh@924 -- # '[' -z 118143 ']' 00:14:59.049 07:15:32 -- common/autotest_common.sh@928 -- # kill -0 118143 00:14:59.308 07:15:32 -- common/autotest_common.sh@929 -- # uname 00:14:59.308 07:15:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:14:59.308 07:15:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 118143 00:14:59.308 killing process with pid 118143 00:14:59.308 07:15:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:14:59.308 07:15:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:14:59.308 07:15:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 118143' 00:14:59.308 07:15:32 -- common/autotest_common.sh@943 -- # kill 118143 00:14:59.308 07:15:32 -- common/autotest_common.sh@948 -- # wait 118143 00:14:59.308 [2024-02-13 07:15:32.763085] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.308 [2024-02-13 07:15:32.763234] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.244 ************************************ 00:15:00.244 END TEST raid_state_function_test 00:15:00.244 ************************************ 00:15:00.244 07:15:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:00.244 00:15:00.244 real 0m10.777s 00:15:00.244 user 0m18.958s 00:15:00.244 sys 0m1.195s 00:15:00.244 07:15:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:00.244 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.244 07:15:33 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:15:00.245 07:15:33 -- 
common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:15:00.245 07:15:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:00.245 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.245 ************************************ 00:15:00.245 START TEST raid_state_function_test_sb 00:15:00.245 ************************************ 00:15:00.245 07:15:33 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 2 true 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=118488 00:15:00.245 Process raid pid: 118488 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 118488' 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 118488 /var/tmp/spdk-raid.sock 00:15:00.245 07:15:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:00.245 07:15:33 -- common/autotest_common.sh@817 -- # '[' -z 118488 ']' 00:15:00.245 07:15:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:00.245 07:15:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:00.245 07:15:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:00.245 07:15:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.245 07:15:33 -- common/autotest_common.sh@10 -- # set +x 00:15:00.245 [2024-02-13 07:15:33.933751] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
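[Editor's note] Every verify_raid_bdev_state call in this log follows the same query pattern (bdev_raid.sh@127 in the traces): dump all RAID bdevs over RPC and select the record of interest with jq, then compare fields against the expected values passed in. A hedged reconstruction — the field names come straight from the JSON dumps above, while the comparison step is an illustrative reduction of the real helper:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Fetch the JSON record for one RAID bdev, as bdev_raid.sh line 127 does
info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

# Assert on the fields these tests check: state, level, discovered/operational counts
state=$(jq -r '.state' <<< "$info")
raid_level=$(jq -r '.raid_level' <<< "$info")
num_discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")

[[ $state == configuring && $raid_level == raid1 ]] || exit 1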
00:15:00.245 [2024-02-13 07:15:33.933952] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.504 [2024-02-13 07:15:34.093413] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.762 [2024-02-13 07:15:34.323343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.021 [2024-02-13 07:15:34.517875] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.280 07:15:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:01.280 07:15:34 -- common/autotest_common.sh@850 -- # return 0 00:15:01.280 07:15:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:01.539 [2024-02-13 07:15:35.091769] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:01.539 [2024-02-13 07:15:35.091879] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:01.539 [2024-02-13 07:15:35.091903] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:01.539 [2024-02-13 07:15:35.091921] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:01.539 07:15:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.798 07:15:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:01.798 "name": "Existed_Raid", 00:15:01.798 "uuid": "e42623c5-91f7-483b-b182-2baad26b3f02", 00:15:01.798 "strip_size_kb": 0, 00:15:01.798 "state": "configuring", 00:15:01.798 "raid_level": "raid1", 00:15:01.798 "superblock": true, 00:15:01.798 "num_base_bdevs": 2, 00:15:01.798 "num_base_bdevs_discovered": 0, 00:15:01.798 "num_base_bdevs_operational": 2, 00:15:01.798 "base_bdevs_list": [ 00:15:01.798 { 00:15:01.798 "name": "BaseBdev1", 00:15:01.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.798 "is_configured": false, 00:15:01.798 "data_offset": 0, 00:15:01.798 "data_size": 0 00:15:01.798 }, 00:15:01.798 { 00:15:01.798 "name": "BaseBdev2", 00:15:01.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.798 "is_configured": false, 00:15:01.798 "data_offset": 0, 00:15:01.798 "data_size": 0 00:15:01.798 } 00:15:01.798 ] 00:15:01.798 }' 00:15:01.798 07:15:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:01.798 07:15:35 -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.366 07:15:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:02.625 [2024-02-13 07:15:36.191747] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:02.625 [2024-02-13 07:15:36.191818] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:02.625 07:15:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:02.884 [2024-02-13 07:15:36.467839] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:02.884 [2024-02-13 07:15:36.467942] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:02.884 [2024-02-13 07:15:36.467967] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:02.884 [2024-02-13 07:15:36.467991] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:02.884 07:15:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:03.143 [2024-02-13 07:15:36.719276] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.143 BaseBdev1 00:15:03.143 07:15:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:03.143 07:15:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:15:03.143 07:15:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:03.143 07:15:36 -- common/autotest_common.sh@887 -- # local i 00:15:03.143 07:15:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:03.143 07:15:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:03.143 07:15:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:03.401 07:15:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:03.660 [ 00:15:03.660 { 00:15:03.660 "name": "BaseBdev1", 00:15:03.660 "aliases": [ 00:15:03.660 "b90b8803-94d4-4725-a770-1f8c4e8a9428" 00:15:03.660 ], 00:15:03.660 "product_name": "Malloc disk", 00:15:03.660 "block_size": 512, 00:15:03.660 "num_blocks": 65536, 00:15:03.661 "uuid": "b90b8803-94d4-4725-a770-1f8c4e8a9428", 00:15:03.661 "assigned_rate_limits": { 00:15:03.661 "rw_ios_per_sec": 0, 00:15:03.661 "rw_mbytes_per_sec": 0, 00:15:03.661 "r_mbytes_per_sec": 0, 00:15:03.661 "w_mbytes_per_sec": 0 00:15:03.661 }, 00:15:03.661 "claimed": true, 00:15:03.661 "claim_type": "exclusive_write", 00:15:03.661 "zoned": false, 00:15:03.661 "supported_io_types": { 00:15:03.661 "read": true, 00:15:03.661 "write": true, 00:15:03.661 "unmap": true, 00:15:03.661 "write_zeroes": true, 00:15:03.661 "flush": true, 00:15:03.661 "reset": true, 00:15:03.661 "compare": false, 00:15:03.661 "compare_and_write": false, 00:15:03.661 "abort": true, 00:15:03.661 "nvme_admin": false, 00:15:03.661 "nvme_io": false 00:15:03.661 }, 00:15:03.661 "memory_domains": [ 00:15:03.661 { 00:15:03.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.661 "dma_device_type": 2 00:15:03.661 } 00:15:03.661 ], 00:15:03.661 "driver_specific": {} 00:15:03.661 } 00:15:03.661 ] 00:15:03.661 07:15:37 -- 
common/autotest_common.sh@893 -- # return 0 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:03.661 07:15:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.920 07:15:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:03.920 "name": "Existed_Raid", 00:15:03.920 "uuid": "141a6d95-deff-41cb-872a-949f9afe7f41", 00:15:03.920 "strip_size_kb": 0, 00:15:03.920 "state": "configuring", 00:15:03.920 "raid_level": "raid1", 00:15:03.920 "superblock": true, 00:15:03.920 "num_base_bdevs": 2, 00:15:03.920 "num_base_bdevs_discovered": 1, 00:15:03.920 "num_base_bdevs_operational": 2, 00:15:03.920 "base_bdevs_list": [ 00:15:03.920 { 00:15:03.920 "name": "BaseBdev1", 00:15:03.920 "uuid": "b90b8803-94d4-4725-a770-1f8c4e8a9428", 00:15:03.920 "is_configured": true, 00:15:03.920 "data_offset": 2048, 00:15:03.920 "data_size": 63488 00:15:03.920 }, 00:15:03.920 { 00:15:03.920 "name": "BaseBdev2", 00:15:03.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.920 "is_configured": false, 00:15:03.920 "data_offset": 0, 00:15:03.920 "data_size": 0 00:15:03.920 } 00:15:03.920 ] 00:15:03.920 }' 00:15:03.920 07:15:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:03.920 07:15:37 -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 07:15:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:04.745 [2024-02-13 07:15:38.343659] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.745 [2024-02-13 07:15:38.343750] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:04.745 07:15:38 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:04.745 07:15:38 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:05.004 07:15:38 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:05.262 BaseBdev1 00:15:05.262 07:15:38 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:05.262 07:15:38 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:15:05.262 07:15:38 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:05.262 07:15:38 -- common/autotest_common.sh@887 -- # local i 00:15:05.262 07:15:38 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:05.262 07:15:38 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:05.262 07:15:38 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:05.520 07:15:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:05.778 [ 00:15:05.778 { 00:15:05.778 "name": "BaseBdev1", 00:15:05.778 "aliases": [ 00:15:05.778 "820dc519-f6be-4030-a99d-4c0b26fe56e1" 00:15:05.778 ], 00:15:05.778 "product_name": "Malloc disk", 00:15:05.778 "block_size": 512, 00:15:05.778 "num_blocks": 65536, 00:15:05.778 "uuid": "820dc519-f6be-4030-a99d-4c0b26fe56e1", 00:15:05.778 "assigned_rate_limits": { 00:15:05.778 "rw_ios_per_sec": 0, 00:15:05.778 "rw_mbytes_per_sec": 0, 00:15:05.778 "r_mbytes_per_sec": 0, 00:15:05.778 "w_mbytes_per_sec": 0 00:15:05.778 }, 00:15:05.778 "claimed": false, 00:15:05.778 "zoned": false, 00:15:05.778 "supported_io_types": { 00:15:05.778 "read": true, 00:15:05.778 "write": true, 00:15:05.778 "unmap": true, 00:15:05.778 "write_zeroes": true, 00:15:05.778 "flush": true, 00:15:05.778 "reset": true, 00:15:05.778 "compare": false, 00:15:05.778 "compare_and_write": false, 00:15:05.778 "abort": true, 00:15:05.778 "nvme_admin": false, 00:15:05.778 "nvme_io": false 00:15:05.778 }, 00:15:05.778 "memory_domains": [ 00:15:05.778 { 00:15:05.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.778 "dma_device_type": 2 00:15:05.778 } 00:15:05.778 ], 00:15:05.778 "driver_specific": {} 00:15:05.778 } 00:15:05.778 ] 00:15:05.778 07:15:39 -- common/autotest_common.sh@893 -- # return 0 00:15:05.778 07:15:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:15:06.036 [2024-02-13 07:15:39.602069] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.036 [2024-02-13 07:15:39.604299] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.036 [2024-02-13 07:15:39.604388] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:06.036 07:15:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.295 07:15:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:06.295 "name": "Existed_Raid", 00:15:06.295 "uuid": "13ac84f8-b7c9-48fa-9a88-0d29e97734ba", 00:15:06.295 "strip_size_kb": 0, 00:15:06.295 "state": "configuring", 
00:15:06.295 "raid_level": "raid1", 00:15:06.295 "superblock": true, 00:15:06.295 "num_base_bdevs": 2, 00:15:06.295 "num_base_bdevs_discovered": 1, 00:15:06.295 "num_base_bdevs_operational": 2, 00:15:06.295 "base_bdevs_list": [ 00:15:06.295 { 00:15:06.295 "name": "BaseBdev1", 00:15:06.295 "uuid": "820dc519-f6be-4030-a99d-4c0b26fe56e1", 00:15:06.295 "is_configured": true, 00:15:06.295 "data_offset": 2048, 00:15:06.295 "data_size": 63488 00:15:06.295 }, 00:15:06.295 { 00:15:06.295 "name": "BaseBdev2", 00:15:06.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.295 "is_configured": false, 00:15:06.295 "data_offset": 0, 00:15:06.295 "data_size": 0 00:15:06.295 } 00:15:06.295 ] 00:15:06.295 }' 00:15:06.295 07:15:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:06.295 07:15:39 -- common/autotest_common.sh@10 -- # set +x 00:15:07.231 07:15:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:07.231 [2024-02-13 07:15:40.835660] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.231 [2024-02-13 07:15:40.835992] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:07.231 [2024-02-13 07:15:40.836008] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:07.231 [2024-02-13 07:15:40.836169] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:07.231 [2024-02-13 07:15:40.836534] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:07.231 [2024-02-13 07:15:40.836556] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:07.231 [2024-02-13 07:15:40.836728] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.231 BaseBdev2 00:15:07.231 07:15:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:07.231 07:15:40 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:15:07.231 07:15:40 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:07.231 07:15:40 -- common/autotest_common.sh@887 -- # local i 00:15:07.231 07:15:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:07.231 07:15:40 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:07.231 07:15:40 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:07.490 07:15:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:07.748 [ 00:15:07.748 { 00:15:07.748 "name": "BaseBdev2", 00:15:07.748 "aliases": [ 00:15:07.748 "f606c9e3-a3ee-4786-9d35-c35665b43ef6" 00:15:07.748 ], 00:15:07.748 "product_name": "Malloc disk", 00:15:07.748 "block_size": 512, 00:15:07.748 "num_blocks": 65536, 00:15:07.748 "uuid": "f606c9e3-a3ee-4786-9d35-c35665b43ef6", 00:15:07.748 "assigned_rate_limits": { 00:15:07.748 "rw_ios_per_sec": 0, 00:15:07.748 "rw_mbytes_per_sec": 0, 00:15:07.748 "r_mbytes_per_sec": 0, 00:15:07.748 "w_mbytes_per_sec": 0 00:15:07.748 }, 00:15:07.748 "claimed": true, 00:15:07.748 "claim_type": "exclusive_write", 00:15:07.748 "zoned": false, 00:15:07.748 "supported_io_types": { 00:15:07.748 "read": true, 00:15:07.748 "write": true, 00:15:07.748 "unmap": true, 00:15:07.748 "write_zeroes": true, 00:15:07.748 "flush": true, 00:15:07.748 "reset": true, 
00:15:07.748 "compare": false, 00:15:07.748 "compare_and_write": false, 00:15:07.748 "abort": true, 00:15:07.748 "nvme_admin": false, 00:15:07.748 "nvme_io": false 00:15:07.748 }, 00:15:07.748 "memory_domains": [ 00:15:07.748 { 00:15:07.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.748 "dma_device_type": 2 00:15:07.748 } 00:15:07.748 ], 00:15:07.748 "driver_specific": {} 00:15:07.748 } 00:15:07.748 ] 00:15:07.748 07:15:41 -- common/autotest_common.sh@893 -- # return 0 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.748 07:15:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.007 07:15:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:08.007 "name": "Existed_Raid", 00:15:08.007 "uuid": "13ac84f8-b7c9-48fa-9a88-0d29e97734ba", 00:15:08.007 "strip_size_kb": 0, 00:15:08.007 "state": "online", 00:15:08.007 "raid_level": "raid1", 00:15:08.007 "superblock": true, 00:15:08.007 "num_base_bdevs": 2, 00:15:08.007 "num_base_bdevs_discovered": 2, 00:15:08.007 "num_base_bdevs_operational": 2, 00:15:08.007 "base_bdevs_list": [ 00:15:08.007 { 00:15:08.007 "name": "BaseBdev1", 00:15:08.007 "uuid": "820dc519-f6be-4030-a99d-4c0b26fe56e1", 00:15:08.007 "is_configured": true, 00:15:08.007 "data_offset": 2048, 00:15:08.007 "data_size": 63488 00:15:08.007 }, 00:15:08.007 { 00:15:08.007 "name": "BaseBdev2", 00:15:08.007 "uuid": "f606c9e3-a3ee-4786-9d35-c35665b43ef6", 00:15:08.007 "is_configured": true, 00:15:08.007 "data_offset": 2048, 00:15:08.007 "data_size": 63488 00:15:08.007 } 00:15:08.007 ] 00:15:08.007 }' 00:15:08.007 07:15:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:08.007 07:15:41 -- common/autotest_common.sh@10 -- # set +x 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:08.942 [2024-02-13 07:15:42.472207] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:08.942 
07:15:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:08.942 07:15:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.200 07:15:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:09.200 "name": "Existed_Raid", 00:15:09.200 "uuid": "13ac84f8-b7c9-48fa-9a88-0d29e97734ba", 00:15:09.200 "strip_size_kb": 0, 00:15:09.200 "state": "online", 00:15:09.200 "raid_level": "raid1", 00:15:09.200 "superblock": true, 00:15:09.200 "num_base_bdevs": 2, 00:15:09.200 "num_base_bdevs_discovered": 1, 00:15:09.200 "num_base_bdevs_operational": 1, 00:15:09.200 "base_bdevs_list": [ 00:15:09.200 { 00:15:09.200 "name": null, 00:15:09.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.200 "is_configured": false, 00:15:09.200 "data_offset": 2048, 00:15:09.200 "data_size": 63488 00:15:09.200 }, 00:15:09.200 { 00:15:09.200 "name": "BaseBdev2", 00:15:09.200 "uuid": "f606c9e3-a3ee-4786-9d35-c35665b43ef6", 00:15:09.200 "is_configured": true, 00:15:09.200 "data_offset": 2048, 00:15:09.200 "data_size": 63488 00:15:09.200 } 00:15:09.200 ] 00:15:09.200 }' 00:15:09.200 07:15:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:09.200 07:15:42 -- common/autotest_common.sh@10 -- # set +x 00:15:10.135 07:15:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:10.135 07:15:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:10.135 07:15:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.135 07:15:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:10.135 07:15:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:10.135 07:15:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:10.135 07:15:43 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:10.394 [2024-02-13 07:15:43.951083] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:10.394 [2024-02-13 07:15:43.951120] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:10.394 [2024-02-13 07:15:43.951190] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:10.394 [2024-02-13 07:15:44.021240] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:10.394 [2024-02-13 07:15:44.021278] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:10.394 07:15:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:10.394 07:15:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:10.394 07:15:44 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:10.394 07:15:44 -- bdev/bdev_raid.sh@281 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.664 07:15:44 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:10.664 07:15:44 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:10.664 07:15:44 -- bdev/bdev_raid.sh@287 -- # killprocess 118488 00:15:10.664 07:15:44 -- common/autotest_common.sh@924 -- # '[' -z 118488 ']' 00:15:10.664 07:15:44 -- common/autotest_common.sh@928 -- # kill -0 118488 00:15:10.664 07:15:44 -- common/autotest_common.sh@929 -- # uname 00:15:10.664 07:15:44 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:10.664 07:15:44 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 118488 00:15:10.664 killing process with pid 118488 00:15:10.664 07:15:44 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:10.664 07:15:44 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:10.664 07:15:44 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 118488' 00:15:10.664 07:15:44 -- common/autotest_common.sh@943 -- # kill 118488 00:15:10.664 07:15:44 -- common/autotest_common.sh@948 -- # wait 118488 00:15:10.664 [2024-02-13 07:15:44.294390] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:10.664 [2024-02-13 07:15:44.294550] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:11.611 ************************************ 00:15:11.611 END TEST raid_state_function_test_sb 00:15:11.611 ************************************ 00:15:11.611 07:15:45 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:11.611 00:15:11.611 real 0m11.401s 00:15:11.611 user 0m20.055s 00:15:11.611 sys 0m1.380s 00:15:11.611 07:15:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:11.611 07:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:11.869 07:15:45 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:15:11.869 07:15:45 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:15:11.869 07:15:45 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:11.869 07:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:11.869 ************************************ 00:15:11.869 START TEST raid_superblock_test 00:15:11.869 ************************************ 00:15:11.869 07:15:45 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid1 2 00:15:11.869 07:15:45 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:15:11.869 07:15:45 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:15:11.869 07:15:45 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:11.869 07:15:45 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:11.869 07:15:45 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@357 -- # raid_pid=118836 00:15:11.870 
07:15:45 -- bdev/bdev_raid.sh@358 -- # waitforlisten 118836 /var/tmp/spdk-raid.sock 00:15:11.870 07:15:45 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:11.870 07:15:45 -- common/autotest_common.sh@817 -- # '[' -z 118836 ']' 00:15:11.870 07:15:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:11.870 07:15:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:11.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:11.870 07:15:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:11.870 07:15:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:11.870 07:15:45 -- common/autotest_common.sh@10 -- # set +x 00:15:11.870 [2024-02-13 07:15:45.393789] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:15:11.870 [2024-02-13 07:15:45.393940] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118836 ] 00:15:11.870 [2024-02-13 07:15:45.548088] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.128 [2024-02-13 07:15:45.730401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.385 [2024-02-13 07:15:45.910556] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:12.643 07:15:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:12.643 07:15:46 -- common/autotest_common.sh@850 -- # return 0 00:15:12.643 07:15:46 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:12.643 07:15:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:12.643 07:15:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:12.643 07:15:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:12.643 07:15:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:12.643 07:15:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:12.643 07:15:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:12.643 07:15:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:12.644 07:15:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:12.901 malloc1 00:15:12.901 07:15:46 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:13.160 [2024-02-13 07:15:46.804248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:13.160 [2024-02-13 07:15:46.804331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.160 [2024-02-13 07:15:46.804363] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:13.160 [2024-02-13 07:15:46.804411] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.160 [2024-02-13 07:15:46.806449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.160 [2024-02-13 07:15:46.806497] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:13.160 pt1 
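The setup above drives everything over JSON-RPC against the bdev_svc app listening on /var/tmp/spdk-raid.sock. A minimal sketch of the same base-bdev construction, runnable by hand while the app is up; the rpc/sock shorthand variables are added here for brevity, everything else matches the calls visible in this run:

  # Sketch: build one passthru-wrapped malloc base bdev, as the test does for pt1.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # 32 MB malloc disk with 512-byte blocks -> 65536 blocks; the raid superblock
  # later reserves 2048 of them (data_offset), leaving blockcnt 63488.
  $rpc -s $sock bdev_malloc_create 32 512 -b malloc1
  # Wrap it in a passthru bdev with a fixed UUID so the run is reproducible.
  $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
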
00:15:13.160 07:15:46 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:13.160 07:15:46 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:13.160 07:15:46 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:13.160 07:15:46 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:13.160 07:15:46 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:13.160 07:15:46 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:13.160 07:15:46 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:13.160 07:15:46 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:13.160 07:15:46 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:13.418 malloc2 00:15:13.676 07:15:47 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:13.676 [2024-02-13 07:15:47.298611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:13.676 [2024-02-13 07:15:47.298743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.676 [2024-02-13 07:15:47.298797] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:13.676 [2024-02-13 07:15:47.298861] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.676 [2024-02-13 07:15:47.301419] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.676 [2024-02-13 07:15:47.301477] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:13.676 pt2 00:15:13.676 07:15:47 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:13.676 07:15:47 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:13.676 07:15:47 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:15:13.934 [2024-02-13 07:15:47.486659] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:13.934 [2024-02-13 07:15:47.488261] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:13.934 [2024-02-13 07:15:47.488458] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007e80 00:15:13.934 [2024-02-13 07:15:47.488474] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:13.934 [2024-02-13 07:15:47.488603] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:13.934 [2024-02-13 07:15:47.488943] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007e80 00:15:13.934 [2024-02-13 07:15:47.488966] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000007e80 00:15:13.934 [2024-02-13 07:15:47.489111] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.934 07:15:47 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:13.935 07:15:47 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.935 07:15:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.193 07:15:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:14.193 "name": "raid_bdev1", 00:15:14.193 "uuid": "7d0d4696-9185-42ac-aa28-d4a7b0d76fee", 00:15:14.193 "strip_size_kb": 0, 00:15:14.193 "state": "online", 00:15:14.193 "raid_level": "raid1", 00:15:14.193 "superblock": true, 00:15:14.193 "num_base_bdevs": 2, 00:15:14.193 "num_base_bdevs_discovered": 2, 00:15:14.193 "num_base_bdevs_operational": 2, 00:15:14.193 "base_bdevs_list": [ 00:15:14.193 { 00:15:14.193 "name": "pt1", 00:15:14.193 "uuid": "fd0e8ae2-271d-502c-a663-9300060cb817", 00:15:14.193 "is_configured": true, 00:15:14.193 "data_offset": 2048, 00:15:14.193 "data_size": 63488 00:15:14.193 }, 00:15:14.193 { 00:15:14.193 "name": "pt2", 00:15:14.193 "uuid": "b66b10f8-5ae0-59c9-a80c-1b237b52e6b7", 00:15:14.193 "is_configured": true, 00:15:14.193 "data_offset": 2048, 00:15:14.193 "data_size": 63488 00:15:14.193 } 00:15:14.193 ] 00:15:14.193 }' 00:15:14.193 07:15:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:14.193 07:15:47 -- common/autotest_common.sh@10 -- # set +x 00:15:14.760 07:15:48 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:14.760 07:15:48 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:15.018 [2024-02-13 07:15:48.626937] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.018 07:15:48 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=7d0d4696-9185-42ac-aa28-d4a7b0d76fee 00:15:15.018 07:15:48 -- bdev/bdev_raid.sh@380 -- # '[' -z 7d0d4696-9185-42ac-aa28-d4a7b0d76fee ']' 00:15:15.018 07:15:48 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:15.277 [2024-02-13 07:15:48.838811] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.277 [2024-02-13 07:15:48.838835] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.277 [2024-02-13 07:15:48.838907] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.277 [2024-02-13 07:15:48.838963] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.277 [2024-02-13 07:15:48.838975] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007e80 name raid_bdev1, state offline 00:15:15.277 07:15:48 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.277 07:15:48 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:15.535 07:15:49 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:15.535 07:15:49 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:15.535 07:15:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:15.535 07:15:49 -- bdev/bdev_raid.sh@393 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:15.794 07:15:49 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:15.794 07:15:49 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:16.052 07:15:49 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:16.052 07:15:49 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:16.052 07:15:49 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:16.052 07:15:49 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:16.052 07:15:49 -- common/autotest_common.sh@638 -- # local es=0 00:15:16.052 07:15:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:16.052 07:15:49 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.052 07:15:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:16.052 07:15:49 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.052 07:15:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:16.052 07:15:49 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.052 07:15:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:16.052 07:15:49 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:16.052 07:15:49 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:16.052 07:15:49 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:15:16.310 [2024-02-13 07:15:49.963023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:16.310 [2024-02-13 07:15:49.964684] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:16.310 [2024-02-13 07:15:49.964755] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:16.310 [2024-02-13 07:15:49.964849] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:16.310 [2024-02-13 07:15:49.964885] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.310 [2024-02-13 07:15:49.964896] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state configuring 00:15:16.310 request: 00:15:16.310 { 00:15:16.310 "name": "raid_bdev1", 00:15:16.310 "raid_level": "raid1", 00:15:16.310 "base_bdevs": [ 00:15:16.310 "malloc1", 00:15:16.310 "malloc2" 00:15:16.310 ], 00:15:16.310 "superblock": false, 00:15:16.310 "method": "bdev_raid_create", 00:15:16.310 "req_id": 1 00:15:16.310 } 00:15:16.310 Got JSON-RPC error response 00:15:16.310 response: 00:15:16.310 { 00:15:16.310 "code": -17, 00:15:16.310 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:16.310 } 00:15:16.310 07:15:49 -- common/autotest_common.sh@641 -- # es=1 00:15:16.311 
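The bdev_raid_create attempt above is expected to fail: both malloc bdevs still carry the superblock written for raid_bdev1, so the RPC returns -17 (File exists) and the NOT wrapper records es=1. A standalone sketch of the same assertion, assuming the rpc/sock shorthands from the earlier sketch:

  # Sketch: the create must be rejected; succeeding here would be a test failure.
  if $rpc -s $sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1; then
      echo 'ERROR: bdev_raid_create unexpectedly succeeded' >&2
      exit 1
  fi
  echo 'rejected with File exists, as expected'
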
07:15:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:16.311 07:15:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:16.311 07:15:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:16.311 07:15:49 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.311 07:15:49 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:16.569 07:15:50 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:16.569 07:15:50 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:16.569 07:15:50 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:16.827 [2024-02-13 07:15:50.399052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:16.827 [2024-02-13 07:15:50.399163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.827 [2024-02-13 07:15:50.399206] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:16.827 [2024-02-13 07:15:50.399238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.827 [2024-02-13 07:15:50.401588] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.827 [2024-02-13 07:15:50.401644] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:16.827 [2024-02-13 07:15:50.401773] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:16.827 [2024-02-13 07:15:50.401844] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:16.827 pt1 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.827 07:15:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.085 07:15:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:17.085 "name": "raid_bdev1", 00:15:17.085 "uuid": "7d0d4696-9185-42ac-aa28-d4a7b0d76fee", 00:15:17.085 "strip_size_kb": 0, 00:15:17.085 "state": "configuring", 00:15:17.085 "raid_level": "raid1", 00:15:17.085 "superblock": true, 00:15:17.085 "num_base_bdevs": 2, 00:15:17.085 "num_base_bdevs_discovered": 1, 00:15:17.085 "num_base_bdevs_operational": 2, 00:15:17.085 "base_bdevs_list": [ 00:15:17.085 { 00:15:17.085 "name": "pt1", 00:15:17.085 "uuid": "fd0e8ae2-271d-502c-a663-9300060cb817", 00:15:17.085 "is_configured": true, 00:15:17.085 "data_offset": 2048, 00:15:17.085 "data_size": 63488 00:15:17.085 }, 00:15:17.085 { 00:15:17.085 "name": null, 00:15:17.085 "uuid": 
"b66b10f8-5ae0-59c9-a80c-1b237b52e6b7", 00:15:17.085 "is_configured": false, 00:15:17.085 "data_offset": 2048, 00:15:17.085 "data_size": 63488 00:15:17.085 } 00:15:17.085 ] 00:15:17.085 }' 00:15:17.085 07:15:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:17.085 07:15:50 -- common/autotest_common.sh@10 -- # set +x 00:15:17.659 07:15:51 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']' 00:15:17.659 07:15:51 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:17.659 07:15:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:17.659 07:15:51 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:17.935 [2024-02-13 07:15:51.427304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:17.935 [2024-02-13 07:15:51.427446] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.935 [2024-02-13 07:15:51.427488] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:17.935 [2024-02-13 07:15:51.427513] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.935 [2024-02-13 07:15:51.428045] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.935 [2024-02-13 07:15:51.428085] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:17.935 [2024-02-13 07:15:51.428194] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:17.935 [2024-02-13 07:15:51.428225] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.935 [2024-02-13 07:15:51.428383] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:15:17.935 [2024-02-13 07:15:51.428396] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:17.935 [2024-02-13 07:15:51.428511] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:15:17.935 [2024-02-13 07:15:51.428836] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:15:17.935 [2024-02-13 07:15:51.428851] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:15:17.935 [2024-02-13 07:15:51.428987] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.935 pt2 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:15:17.935 07:15:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.194 07:15:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:18.194 "name": "raid_bdev1", 00:15:18.194 "uuid": "7d0d4696-9185-42ac-aa28-d4a7b0d76fee", 00:15:18.194 "strip_size_kb": 0, 00:15:18.194 "state": "online", 00:15:18.194 "raid_level": "raid1", 00:15:18.194 "superblock": true, 00:15:18.194 "num_base_bdevs": 2, 00:15:18.194 "num_base_bdevs_discovered": 2, 00:15:18.194 "num_base_bdevs_operational": 2, 00:15:18.194 "base_bdevs_list": [ 00:15:18.194 { 00:15:18.194 "name": "pt1", 00:15:18.194 "uuid": "fd0e8ae2-271d-502c-a663-9300060cb817", 00:15:18.194 "is_configured": true, 00:15:18.194 "data_offset": 2048, 00:15:18.194 "data_size": 63488 00:15:18.194 }, 00:15:18.194 { 00:15:18.194 "name": "pt2", 00:15:18.194 "uuid": "b66b10f8-5ae0-59c9-a80c-1b237b52e6b7", 00:15:18.194 "is_configured": true, 00:15:18.194 "data_offset": 2048, 00:15:18.194 "data_size": 63488 00:15:18.194 } 00:15:18.194 ] 00:15:18.194 }' 00:15:18.194 07:15:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:18.194 07:15:51 -- common/autotest_common.sh@10 -- # set +x 00:15:18.762 07:15:52 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:18.762 07:15:52 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:19.020 [2024-02-13 07:15:52.603755] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.020 07:15:52 -- bdev/bdev_raid.sh@430 -- # '[' 7d0d4696-9185-42ac-aa28-d4a7b0d76fee '!=' 7d0d4696-9185-42ac-aa28-d4a7b0d76fee ']' 00:15:19.020 07:15:52 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:15:19.020 07:15:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:19.020 07:15:52 -- bdev/bdev_raid.sh@196 -- # return 0 00:15:19.020 07:15:52 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:19.279 [2024-02-13 07:15:52.815566] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.279 07:15:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.538 07:15:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.538 "name": "raid_bdev1", 00:15:19.538 "uuid": "7d0d4696-9185-42ac-aa28-d4a7b0d76fee", 00:15:19.538 "strip_size_kb": 0, 00:15:19.538 "state": "online", 00:15:19.538 "raid_level": "raid1", 00:15:19.538 "superblock": true, 00:15:19.538 "num_base_bdevs": 2, 00:15:19.538 "num_base_bdevs_discovered": 1, 00:15:19.538 
"num_base_bdevs_operational": 1, 00:15:19.538 "base_bdevs_list": [ 00:15:19.538 { 00:15:19.538 "name": null, 00:15:19.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.538 "is_configured": false, 00:15:19.538 "data_offset": 2048, 00:15:19.538 "data_size": 63488 00:15:19.538 }, 00:15:19.538 { 00:15:19.538 "name": "pt2", 00:15:19.538 "uuid": "b66b10f8-5ae0-59c9-a80c-1b237b52e6b7", 00:15:19.538 "is_configured": true, 00:15:19.538 "data_offset": 2048, 00:15:19.538 "data_size": 63488 00:15:19.538 } 00:15:19.538 ] 00:15:19.538 }' 00:15:19.538 07:15:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.538 07:15:53 -- common/autotest_common.sh@10 -- # set +x 00:15:20.105 07:15:53 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:20.365 [2024-02-13 07:15:53.999788] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.365 [2024-02-13 07:15:53.999832] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.365 [2024-02-13 07:15:53.999925] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:20.365 [2024-02-13 07:15:53.999984] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.365 [2024-02-13 07:15:53.999995] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:15:20.365 07:15:54 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.365 07:15:54 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:15:20.623 07:15:54 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:15:20.623 07:15:54 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:15:20.623 07:15:54 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:15:20.623 07:15:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:20.623 07:15:54 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:20.882 07:15:54 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:15:20.882 07:15:54 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:15:20.882 07:15:54 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:15:20.882 07:15:54 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:15:20.882 07:15:54 -- bdev/bdev_raid.sh@462 -- # i=1 00:15:20.882 07:15:54 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:21.141 [2024-02-13 07:15:54.707879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.141 [2024-02-13 07:15:54.708006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.141 [2024-02-13 07:15:54.708045] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:21.141 [2024-02-13 07:15:54.708079] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.141 [2024-02-13 07:15:54.710731] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.141 [2024-02-13 07:15:54.710788] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.141 [2024-02-13 07:15:54.710930] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:21.141 [2024-02-13 
07:15:54.710994] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.141 [2024-02-13 07:15:54.711107] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:15:21.141 [2024-02-13 07:15:54.711128] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:21.141 [2024-02-13 07:15:54.711230] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ad0 00:15:21.141 [2024-02-13 07:15:54.711592] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:15:21.141 [2024-02-13 07:15:54.711614] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:15:21.141 [2024-02-13 07:15:54.711756] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.141 pt2 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.141 07:15:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.400 07:15:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:21.400 "name": "raid_bdev1", 00:15:21.400 "uuid": "7d0d4696-9185-42ac-aa28-d4a7b0d76fee", 00:15:21.400 "strip_size_kb": 0, 00:15:21.400 "state": "online", 00:15:21.400 "raid_level": "raid1", 00:15:21.400 "superblock": true, 00:15:21.400 "num_base_bdevs": 2, 00:15:21.400 "num_base_bdevs_discovered": 1, 00:15:21.400 "num_base_bdevs_operational": 1, 00:15:21.400 "base_bdevs_list": [ 00:15:21.400 { 00:15:21.400 "name": null, 00:15:21.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.400 "is_configured": false, 00:15:21.400 "data_offset": 2048, 00:15:21.400 "data_size": 63488 00:15:21.400 }, 00:15:21.400 { 00:15:21.400 "name": "pt2", 00:15:21.400 "uuid": "b66b10f8-5ae0-59c9-a80c-1b237b52e6b7", 00:15:21.400 "is_configured": true, 00:15:21.400 "data_offset": 2048, 00:15:21.400 "data_size": 63488 00:15:21.400 } 00:15:21.400 ] 00:15:21.400 }' 00:15:21.400 07:15:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:21.400 07:15:54 -- common/autotest_common.sh@10 -- # set +x 00:15:21.968 07:15:55 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']' 00:15:21.968 07:15:55 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:21.968 07:15:55 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:15:22.227 [2024-02-13 07:15:55.876372] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.227 07:15:55 -- bdev/bdev_raid.sh@506 -- # '[' 7d0d4696-9185-42ac-aa28-d4a7b0d76fee '!=' 7d0d4696-9185-42ac-aa28-d4a7b0d76fee ']' 00:15:22.227 
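After pt2 is recreated, examine finds its superblock and reassembles raid_bdev1 in degraded form: state online with only one of two base bdevs discovered, which is exactly what verify_raid_bdev_state raid_bdev1 online raid1 0 1 asserts above. That check boils down to one RPC plus a jq filter; a sketch using the same field names as the raid_bdev_info dumps, with rpc/sock as before:

  # Sketch: probe raid_bdev1 and confirm the degraded reassembly (1 of 2 bases).
  info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
  [ "$(jq -r .state <<<"$info")" = online ] || exit 1
  [ "$(jq -r .num_base_bdevs_discovered <<<"$info")" -eq 1 ] || exit 1
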
07:15:55 -- bdev/bdev_raid.sh@511 -- # killprocess 118836 00:15:22.227 07:15:55 -- common/autotest_common.sh@924 -- # '[' -z 118836 ']' 00:15:22.227 07:15:55 -- common/autotest_common.sh@928 -- # kill -0 118836 00:15:22.227 07:15:55 -- common/autotest_common.sh@929 -- # uname 00:15:22.227 07:15:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:22.227 07:15:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 118836 00:15:22.227 killing process with pid 118836 00:15:22.227 07:15:55 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:22.227 07:15:55 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:22.227 07:15:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 118836' 00:15:22.227 07:15:55 -- common/autotest_common.sh@943 -- # kill 118836 00:15:22.227 07:15:55 -- common/autotest_common.sh@948 -- # wait 118836 00:15:22.227 [2024-02-13 07:15:55.910478] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.227 [2024-02-13 07:15:55.910576] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.227 [2024-02-13 07:15:55.910657] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.227 [2024-02-13 07:15:55.910675] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:15:22.487 [2024-02-13 07:15:56.056364] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.423 ************************************ 00:15:23.423 END TEST raid_superblock_test 00:15:23.423 ************************************ 00:15:23.424 07:15:57 -- bdev/bdev_raid.sh@513 -- # return 0 00:15:23.424 00:15:23.424 real 0m11.732s 00:15:23.424 user 0m21.080s 00:15:23.424 sys 0m1.316s 00:15:23.424 07:15:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:23.424 07:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:23.424 07:15:57 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:15:23.424 07:15:57 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:15:23.424 07:15:57 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:15:23.424 07:15:57 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:15:23.424 07:15:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:23.424 07:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:23.683 ************************************ 00:15:23.683 START TEST raid_state_function_test 00:15:23.683 ************************************ 00:15:23.683 07:15:57 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 3 false 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # 
echo BaseBdev2 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:23.683 Process raid pid: 119209 00:15:23.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@226 -- # raid_pid=119209 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119209' 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119209 /var/tmp/spdk-raid.sock 00:15:23.683 07:15:57 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:23.683 07:15:57 -- common/autotest_common.sh@817 -- # '[' -z 119209 ']' 00:15:23.683 07:15:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:23.683 07:15:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:23.683 07:15:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:23.683 07:15:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:23.683 07:15:57 -- common/autotest_common.sh@10 -- # set +x 00:15:23.683 [2024-02-13 07:15:57.199217] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
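The base_bdevs array in the test preamble above is produced by a command substitution around a counted for loop; isolated, with num_base_bdevs=3 as in this test, the idiom is:

  # Sketch: generate the base bdev name list BaseBdev1..BaseBdev3.
  num_base_bdevs=3
  base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
  echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2 BaseBdev3
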
00:15:23.683 [2024-02-13 07:15:57.199420] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.683 [2024-02-13 07:15:57.368023] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.942 [2024-02-13 07:15:57.563923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.201 [2024-02-13 07:15:57.748569] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.768 07:15:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:24.768 07:15:58 -- common/autotest_common.sh@850 -- # return 0 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:24.768 [2024-02-13 07:15:58.403602] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.768 [2024-02-13 07:15:58.403716] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.768 [2024-02-13 07:15:58.403731] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.768 [2024-02-13 07:15:58.403751] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.768 [2024-02-13 07:15:58.403758] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.768 [2024-02-13 07:15:58.403802] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.768 07:15:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.027 07:15:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:25.027 "name": "Existed_Raid", 00:15:25.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.027 "strip_size_kb": 64, 00:15:25.027 "state": "configuring", 00:15:25.027 "raid_level": "raid0", 00:15:25.027 "superblock": false, 00:15:25.027 "num_base_bdevs": 3, 00:15:25.027 "num_base_bdevs_discovered": 0, 00:15:25.027 "num_base_bdevs_operational": 3, 00:15:25.027 "base_bdevs_list": [ 00:15:25.027 { 00:15:25.027 "name": "BaseBdev1", 00:15:25.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.027 "is_configured": false, 00:15:25.027 "data_offset": 0, 00:15:25.027 "data_size": 0 00:15:25.027 }, 00:15:25.027 { 00:15:25.027 "name": "BaseBdev2", 00:15:25.027 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:25.027 "is_configured": false, 00:15:25.027 "data_offset": 0, 00:15:25.027 "data_size": 0 00:15:25.027 }, 00:15:25.027 { 00:15:25.027 "name": "BaseBdev3", 00:15:25.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.027 "is_configured": false, 00:15:25.027 "data_offset": 0, 00:15:25.027 "data_size": 0 00:15:25.027 } 00:15:25.027 ] 00:15:25.027 }' 00:15:25.027 07:15:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:25.027 07:15:58 -- common/autotest_common.sh@10 -- # set +x 00:15:25.976 07:15:59 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:25.976 [2024-02-13 07:15:59.631737] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.976 [2024-02-13 07:15:59.631795] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:25.976 07:15:59 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:26.251 [2024-02-13 07:15:59.899791] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:26.251 [2024-02-13 07:15:59.899867] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:26.251 [2024-02-13 07:15:59.899895] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.251 [2024-02-13 07:15:59.899924] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.251 [2024-02-13 07:15:59.899932] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.251 [2024-02-13 07:15:59.899958] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.251 07:15:59 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:26.509 [2024-02-13 07:16:00.183377] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.509 BaseBdev1 00:15:26.767 07:16:00 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:26.767 07:16:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:15:26.767 07:16:00 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:26.767 07:16:00 -- common/autotest_common.sh@887 -- # local i 00:15:26.767 07:16:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:26.767 07:16:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:26.767 07:16:00 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:26.767 07:16:00 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:27.025 [ 00:15:27.026 { 00:15:27.026 "name": "BaseBdev1", 00:15:27.026 "aliases": [ 00:15:27.026 "64a75725-0b2a-4d51-b398-0f558a361f40" 00:15:27.026 ], 00:15:27.026 "product_name": "Malloc disk", 00:15:27.026 "block_size": 512, 00:15:27.026 "num_blocks": 65536, 00:15:27.026 "uuid": "64a75725-0b2a-4d51-b398-0f558a361f40", 00:15:27.026 "assigned_rate_limits": { 00:15:27.026 "rw_ios_per_sec": 0, 00:15:27.026 "rw_mbytes_per_sec": 0, 00:15:27.026 "r_mbytes_per_sec": 0, 00:15:27.026 "w_mbytes_per_sec": 0 
00:15:27.026 }, 00:15:27.026 "claimed": true, 00:15:27.026 "claim_type": "exclusive_write", 00:15:27.026 "zoned": false, 00:15:27.026 "supported_io_types": { 00:15:27.026 "read": true, 00:15:27.026 "write": true, 00:15:27.026 "unmap": true, 00:15:27.026 "write_zeroes": true, 00:15:27.026 "flush": true, 00:15:27.026 "reset": true, 00:15:27.026 "compare": false, 00:15:27.026 "compare_and_write": false, 00:15:27.026 "abort": true, 00:15:27.026 "nvme_admin": false, 00:15:27.026 "nvme_io": false 00:15:27.026 }, 00:15:27.026 "memory_domains": [ 00:15:27.026 { 00:15:27.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.026 "dma_device_type": 2 00:15:27.026 } 00:15:27.026 ], 00:15:27.026 "driver_specific": {} 00:15:27.026 } 00:15:27.026 ] 00:15:27.026 07:16:00 -- common/autotest_common.sh@893 -- # return 0 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.026 07:16:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.284 07:16:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:27.284 "name": "Existed_Raid", 00:15:27.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.284 "strip_size_kb": 64, 00:15:27.284 "state": "configuring", 00:15:27.284 "raid_level": "raid0", 00:15:27.284 "superblock": false, 00:15:27.284 "num_base_bdevs": 3, 00:15:27.284 "num_base_bdevs_discovered": 1, 00:15:27.284 "num_base_bdevs_operational": 3, 00:15:27.284 "base_bdevs_list": [ 00:15:27.284 { 00:15:27.284 "name": "BaseBdev1", 00:15:27.284 "uuid": "64a75725-0b2a-4d51-b398-0f558a361f40", 00:15:27.284 "is_configured": true, 00:15:27.284 "data_offset": 0, 00:15:27.284 "data_size": 65536 00:15:27.284 }, 00:15:27.284 { 00:15:27.284 "name": "BaseBdev2", 00:15:27.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.284 "is_configured": false, 00:15:27.284 "data_offset": 0, 00:15:27.284 "data_size": 0 00:15:27.284 }, 00:15:27.284 { 00:15:27.284 "name": "BaseBdev3", 00:15:27.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.284 "is_configured": false, 00:15:27.284 "data_offset": 0, 00:15:27.284 "data_size": 0 00:15:27.284 } 00:15:27.284 ] 00:15:27.284 }' 00:15:27.285 07:16:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:27.285 07:16:00 -- common/autotest_common.sh@10 -- # set +x 00:15:27.852 07:16:01 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:28.111 [2024-02-13 07:16:01.743750] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:28.111 [2024-02-13 07:16:01.743822] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:15:28.111 07:16:01 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:15:28.111 07:16:01 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:28.370 [2024-02-13 07:16:01.939825] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.370 [2024-02-13 07:16:01.941662] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:28.370 [2024-02-13 07:16:01.941740] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:28.370 [2024-02-13 07:16:01.941768] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:28.370 [2024-02-13 07:16:01.941794] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.370 07:16:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.628 07:16:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:28.628 "name": "Existed_Raid", 00:15:28.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.628 "strip_size_kb": 64, 00:15:28.628 "state": "configuring", 00:15:28.628 "raid_level": "raid0", 00:15:28.628 "superblock": false, 00:15:28.628 "num_base_bdevs": 3, 00:15:28.628 "num_base_bdevs_discovered": 1, 00:15:28.628 "num_base_bdevs_operational": 3, 00:15:28.628 "base_bdevs_list": [ 00:15:28.628 { 00:15:28.628 "name": "BaseBdev1", 00:15:28.628 "uuid": "64a75725-0b2a-4d51-b398-0f558a361f40", 00:15:28.628 "is_configured": true, 00:15:28.628 "data_offset": 0, 00:15:28.628 "data_size": 65536 00:15:28.628 }, 00:15:28.628 { 00:15:28.628 "name": "BaseBdev2", 00:15:28.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.628 "is_configured": false, 00:15:28.628 "data_offset": 0, 00:15:28.628 "data_size": 0 00:15:28.628 }, 00:15:28.628 { 00:15:28.628 "name": "BaseBdev3", 00:15:28.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.628 "is_configured": false, 00:15:28.628 "data_offset": 0, 00:15:28.629 "data_size": 0 00:15:28.629 } 00:15:28.629 ] 00:15:28.629 }' 00:15:28.629 07:16:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:28.629 07:16:02 -- common/autotest_common.sh@10 -- # set +x 00:15:29.196 07:16:02 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:29.455 [2024-02-13 07:16:03.094296] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.455 BaseBdev2 00:15:29.455 07:16:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:29.455 07:16:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:15:29.455 07:16:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:29.455 07:16:03 -- common/autotest_common.sh@887 -- # local i 00:15:29.455 07:16:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:29.455 07:16:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:29.455 07:16:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:29.714 07:16:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:29.973 [ 00:15:29.973 { 00:15:29.973 "name": "BaseBdev2", 00:15:29.973 "aliases": [ 00:15:29.973 "0fa8ac26-e8ad-4fe0-bd5a-2eea05dfbb17" 00:15:29.973 ], 00:15:29.973 "product_name": "Malloc disk", 00:15:29.973 "block_size": 512, 00:15:29.973 "num_blocks": 65536, 00:15:29.973 "uuid": "0fa8ac26-e8ad-4fe0-bd5a-2eea05dfbb17", 00:15:29.973 "assigned_rate_limits": { 00:15:29.973 "rw_ios_per_sec": 0, 00:15:29.973 "rw_mbytes_per_sec": 0, 00:15:29.973 "r_mbytes_per_sec": 0, 00:15:29.973 "w_mbytes_per_sec": 0 00:15:29.973 }, 00:15:29.973 "claimed": true, 00:15:29.973 "claim_type": "exclusive_write", 00:15:29.973 "zoned": false, 00:15:29.973 "supported_io_types": { 00:15:29.973 "read": true, 00:15:29.973 "write": true, 00:15:29.973 "unmap": true, 00:15:29.973 "write_zeroes": true, 00:15:29.973 "flush": true, 00:15:29.973 "reset": true, 00:15:29.973 "compare": false, 00:15:29.973 "compare_and_write": false, 00:15:29.973 "abort": true, 00:15:29.973 "nvme_admin": false, 00:15:29.973 "nvme_io": false 00:15:29.973 }, 00:15:29.973 "memory_domains": [ 00:15:29.973 { 00:15:29.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.973 "dma_device_type": 2 00:15:29.973 } 00:15:29.973 ], 00:15:29.973 "driver_specific": {} 00:15:29.973 } 00:15:29.973 ] 00:15:29.973 07:16:03 -- common/autotest_common.sh@893 -- # return 0 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.973 07:16:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
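BaseBdev2 is added the same way BaseBdev1 was: create the malloc disk, then waitforbdev blocks until the bdev has been examined and is visible, with a 2000 ms timeout. A sketch of that sequence using the rpc/sock shorthands from the earlier sketches (the raid_bdev_info capture that follows is the output of the jq filter above):

  # Sketch: create the second base bdev and wait until it is usable.
  $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev2
  $rpc -s $sock bdev_wait_for_examine
  $rpc -s $sock bdev_get_bdevs -b BaseBdev2 -t 2000 >/dev/null || exit 1
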
00:15:30.233 07:16:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.233 "name": "Existed_Raid", 00:15:30.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.233 "strip_size_kb": 64, 00:15:30.233 "state": "configuring", 00:15:30.233 "raid_level": "raid0", 00:15:30.233 "superblock": false, 00:15:30.233 "num_base_bdevs": 3, 00:15:30.233 "num_base_bdevs_discovered": 2, 00:15:30.233 "num_base_bdevs_operational": 3, 00:15:30.233 "base_bdevs_list": [ 00:15:30.233 { 00:15:30.233 "name": "BaseBdev1", 00:15:30.233 "uuid": "64a75725-0b2a-4d51-b398-0f558a361f40", 00:15:30.233 "is_configured": true, 00:15:30.233 "data_offset": 0, 00:15:30.233 "data_size": 65536 00:15:30.233 }, 00:15:30.233 { 00:15:30.233 "name": "BaseBdev2", 00:15:30.233 "uuid": "0fa8ac26-e8ad-4fe0-bd5a-2eea05dfbb17", 00:15:30.233 "is_configured": true, 00:15:30.233 "data_offset": 0, 00:15:30.233 "data_size": 65536 00:15:30.233 }, 00:15:30.233 { 00:15:30.233 "name": "BaseBdev3", 00:15:30.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.233 "is_configured": false, 00:15:30.233 "data_offset": 0, 00:15:30.233 "data_size": 0 00:15:30.233 } 00:15:30.233 ] 00:15:30.233 }' 00:15:30.233 07:16:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.233 07:16:03 -- common/autotest_common.sh@10 -- # set +x 00:15:31.169 07:16:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:31.169 [2024-02-13 07:16:04.727670] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.169 [2024-02-13 07:16:04.727754] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:15:31.169 [2024-02-13 07:16:04.727764] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:31.169 [2024-02-13 07:16:04.727888] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:15:31.169 [2024-02-13 07:16:04.728322] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:15:31.169 [2024-02-13 07:16:04.728346] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:15:31.169 [2024-02-13 07:16:04.728649] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.169 BaseBdev3 00:15:31.169 07:16:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:31.169 07:16:04 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:15:31.169 07:16:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:31.169 07:16:04 -- common/autotest_common.sh@887 -- # local i 00:15:31.169 07:16:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:31.169 07:16:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:31.169 07:16:04 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:31.428 07:16:04 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:31.686 [ 00:15:31.686 { 00:15:31.686 "name": "BaseBdev3", 00:15:31.686 "aliases": [ 00:15:31.687 "af3959da-7bc8-4dad-a1cd-bd84af7f62cb" 00:15:31.687 ], 00:15:31.687 "product_name": "Malloc disk", 00:15:31.687 "block_size": 512, 00:15:31.687 "num_blocks": 65536, 00:15:31.687 "uuid": "af3959da-7bc8-4dad-a1cd-bd84af7f62cb", 00:15:31.687 "assigned_rate_limits": { 00:15:31.687 
"rw_ios_per_sec": 0, 00:15:31.687 "rw_mbytes_per_sec": 0, 00:15:31.687 "r_mbytes_per_sec": 0, 00:15:31.687 "w_mbytes_per_sec": 0 00:15:31.687 }, 00:15:31.687 "claimed": true, 00:15:31.687 "claim_type": "exclusive_write", 00:15:31.687 "zoned": false, 00:15:31.687 "supported_io_types": { 00:15:31.687 "read": true, 00:15:31.687 "write": true, 00:15:31.687 "unmap": true, 00:15:31.687 "write_zeroes": true, 00:15:31.687 "flush": true, 00:15:31.687 "reset": true, 00:15:31.687 "compare": false, 00:15:31.687 "compare_and_write": false, 00:15:31.687 "abort": true, 00:15:31.687 "nvme_admin": false, 00:15:31.687 "nvme_io": false 00:15:31.687 }, 00:15:31.687 "memory_domains": [ 00:15:31.687 { 00:15:31.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.687 "dma_device_type": 2 00:15:31.687 } 00:15:31.687 ], 00:15:31.687 "driver_specific": {} 00:15:31.687 } 00:15:31.687 ] 00:15:31.687 07:16:05 -- common/autotest_common.sh@893 -- # return 0 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.687 07:16:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.946 07:16:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:31.946 "name": "Existed_Raid", 00:15:31.946 "uuid": "8b029941-4f39-447a-8807-9bfd34d59fd6", 00:15:31.946 "strip_size_kb": 64, 00:15:31.946 "state": "online", 00:15:31.946 "raid_level": "raid0", 00:15:31.946 "superblock": false, 00:15:31.946 "num_base_bdevs": 3, 00:15:31.946 "num_base_bdevs_discovered": 3, 00:15:31.946 "num_base_bdevs_operational": 3, 00:15:31.946 "base_bdevs_list": [ 00:15:31.946 { 00:15:31.946 "name": "BaseBdev1", 00:15:31.946 "uuid": "64a75725-0b2a-4d51-b398-0f558a361f40", 00:15:31.946 "is_configured": true, 00:15:31.946 "data_offset": 0, 00:15:31.946 "data_size": 65536 00:15:31.946 }, 00:15:31.946 { 00:15:31.946 "name": "BaseBdev2", 00:15:31.946 "uuid": "0fa8ac26-e8ad-4fe0-bd5a-2eea05dfbb17", 00:15:31.946 "is_configured": true, 00:15:31.946 "data_offset": 0, 00:15:31.946 "data_size": 65536 00:15:31.946 }, 00:15:31.946 { 00:15:31.946 "name": "BaseBdev3", 00:15:31.946 "uuid": "af3959da-7bc8-4dad-a1cd-bd84af7f62cb", 00:15:31.946 "is_configured": true, 00:15:31.946 "data_offset": 0, 00:15:31.946 "data_size": 65536 00:15:31.946 } 00:15:31.946 ] 00:15:31.946 }' 00:15:31.946 07:16:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:31.946 07:16:05 -- common/autotest_common.sh@10 -- # set +x 00:15:32.513 07:16:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:15:32.772 [2024-02-13 07:16:06.376077] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.772 [2024-02-13 07:16:06.376113] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.772 [2024-02-13 07:16:06.376188] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.031 07:16:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.290 07:16:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:33.290 "name": "Existed_Raid", 00:15:33.290 "uuid": "8b029941-4f39-447a-8807-9bfd34d59fd6", 00:15:33.290 "strip_size_kb": 64, 00:15:33.290 "state": "offline", 00:15:33.290 "raid_level": "raid0", 00:15:33.290 "superblock": false, 00:15:33.290 "num_base_bdevs": 3, 00:15:33.290 "num_base_bdevs_discovered": 2, 00:15:33.290 "num_base_bdevs_operational": 2, 00:15:33.290 "base_bdevs_list": [ 00:15:33.290 { 00:15:33.290 "name": null, 00:15:33.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.290 "is_configured": false, 00:15:33.290 "data_offset": 0, 00:15:33.290 "data_size": 65536 00:15:33.290 }, 00:15:33.290 { 00:15:33.290 "name": "BaseBdev2", 00:15:33.290 "uuid": "0fa8ac26-e8ad-4fe0-bd5a-2eea05dfbb17", 00:15:33.290 "is_configured": true, 00:15:33.290 "data_offset": 0, 00:15:33.290 "data_size": 65536 00:15:33.290 }, 00:15:33.290 { 00:15:33.290 "name": "BaseBdev3", 00:15:33.290 "uuid": "af3959da-7bc8-4dad-a1cd-bd84af7f62cb", 00:15:33.290 "is_configured": true, 00:15:33.290 "data_offset": 0, 00:15:33.290 "data_size": 65536 00:15:33.290 } 00:15:33.290 ] 00:15:33.290 }' 00:15:33.290 07:16:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:33.290 07:16:06 -- common/autotest_common.sh@10 -- # set +x 00:15:33.858 07:16:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:33.858 07:16:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:33.858 07:16:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.858 07:16:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:34.117 07:16:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:34.117 07:16:07 -- bdev/bdev_raid.sh@275 -- 
# '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.117 07:16:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:34.382 [2024-02-13 07:16:07.892883] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.382 07:16:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:34.382 07:16:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:34.382 07:16:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.382 07:16:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:34.641 07:16:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:34.641 07:16:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.641 07:16:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:34.900 [2024-02-13 07:16:08.447147] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:34.900 [2024-02-13 07:16:08.447247] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:15:34.900 07:16:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:34.900 07:16:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:34.900 07:16:08 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.900 07:16:08 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:35.159 07:16:08 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:35.160 07:16:08 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:35.160 07:16:08 -- bdev/bdev_raid.sh@287 -- # killprocess 119209 00:15:35.160 07:16:08 -- common/autotest_common.sh@924 -- # '[' -z 119209 ']' 00:15:35.160 07:16:08 -- common/autotest_common.sh@928 -- # kill -0 119209 00:15:35.160 07:16:08 -- common/autotest_common.sh@929 -- # uname 00:15:35.160 07:16:08 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:35.160 07:16:08 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 119209 00:15:35.160 killing process with pid 119209 00:15:35.160 07:16:08 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:35.160 07:16:08 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:35.160 07:16:08 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 119209' 00:15:35.160 07:16:08 -- common/autotest_common.sh@943 -- # kill 119209 00:15:35.160 07:16:08 -- common/autotest_common.sh@948 -- # wait 119209 00:15:35.160 [2024-02-13 07:16:08.760032] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:35.160 [2024-02-13 07:16:08.760196] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.538 ************************************ 00:15:36.538 END TEST raid_state_function_test 00:15:36.538 ************************************ 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:36.538 00:15:36.538 real 0m12.671s 00:15:36.538 user 0m22.590s 00:15:36.538 sys 0m1.470s 00:15:36.538 07:16:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:36.538 07:16:09 -- common/autotest_common.sh@10 -- # set +x 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:15:36.538 07:16:09 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:15:36.538 07:16:09 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:15:36.538 07:16:09 -- common/autotest_common.sh@10 -- # set +x 00:15:36.538 ************************************ 00:15:36.538 START TEST raid_state_function_test_sb 00:15:36.538 ************************************ 00:15:36.538 07:16:09 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 3 true 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@226 -- # raid_pid=119632 00:15:36.538 Process raid pid: 119632 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 119632' 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@228 -- # waitforlisten 119632 /var/tmp/spdk-raid.sock 00:15:36.538 07:16:09 -- common/autotest_common.sh@817 -- # '[' -z 119632 ']' 00:15:36.538 07:16:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:36.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:36.538 07:16:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.538 07:16:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:36.538 07:16:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.538 07:16:09 -- common/autotest_common.sh@10 -- # set +x 00:15:36.538 07:16:09 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:36.538 [2024-02-13 07:16:09.916955] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
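
# Note: the trace above shows how raid_state_function_test_sb drives SPDK — a bare
# bdev_svc app is started on a dedicated RPC socket and everything else is done
# through rpc.py against that socket. A minimal sketch of the same setup, using
# only paths and flags that appear in this trace:
#
#   /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
#       -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
#   RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
#   # Creating the array before any base bdev exists is legal; the raid bdev
#   # simply sits in state "configuring" until all members are discovered:
#   $RPC bdev_raid_create -z 64 -s -r raid0 \
#       -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
#   $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
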
00:15:36.538 [2024-02-13 07:16:09.917312] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.538 [2024-02-13 07:16:10.070643] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.797 [2024-02-13 07:16:10.257381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.797 [2024-02-13 07:16:10.444471] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.366 07:16:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:37.366 07:16:10 -- common/autotest_common.sh@850 -- # return 0 00:15:37.366 07:16:10 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:37.625 [2024-02-13 07:16:11.120328] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.625 [2024-02-13 07:16:11.120592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.625 [2024-02-13 07:16:11.120711] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.625 [2024-02-13 07:16:11.120772] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.625 [2024-02-13 07:16:11.120978] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.625 [2024-02-13 07:16:11.121075] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.625 07:16:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.885 07:16:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.885 "name": "Existed_Raid", 00:15:37.885 "uuid": "1f8893e4-7b43-4cd3-a946-b3fd98908040", 00:15:37.885 "strip_size_kb": 64, 00:15:37.885 "state": "configuring", 00:15:37.885 "raid_level": "raid0", 00:15:37.885 "superblock": true, 00:15:37.885 "num_base_bdevs": 3, 00:15:37.885 "num_base_bdevs_discovered": 0, 00:15:37.885 "num_base_bdevs_operational": 3, 00:15:37.885 "base_bdevs_list": [ 00:15:37.885 { 00:15:37.885 "name": "BaseBdev1", 00:15:37.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.885 "is_configured": false, 00:15:37.885 "data_offset": 0, 00:15:37.885 "data_size": 0 00:15:37.885 }, 00:15:37.885 { 00:15:37.885 "name": "BaseBdev2", 00:15:37.885 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:37.885 "is_configured": false, 00:15:37.885 "data_offset": 0, 00:15:37.885 "data_size": 0 00:15:37.885 }, 00:15:37.885 { 00:15:37.885 "name": "BaseBdev3", 00:15:37.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.885 "is_configured": false, 00:15:37.885 "data_offset": 0, 00:15:37.885 "data_size": 0 00:15:37.885 } 00:15:37.885 ] 00:15:37.885 }' 00:15:37.885 07:16:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.885 07:16:11 -- common/autotest_common.sh@10 -- # set +x 00:15:38.453 07:16:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:38.712 [2024-02-13 07:16:12.300439] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:38.712 [2024-02-13 07:16:12.300641] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:15:38.713 07:16:12 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:38.972 [2024-02-13 07:16:12.504553] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.972 [2024-02-13 07:16:12.504798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.972 [2024-02-13 07:16:12.504918] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:38.972 [2024-02-13 07:16:12.504984] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:38.972 [2024-02-13 07:16:12.505174] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:38.972 [2024-02-13 07:16:12.505242] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:38.972 07:16:12 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.230 [2024-02-13 07:16:12.759353] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.230 BaseBdev1 00:15:39.231 07:16:12 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:15:39.231 07:16:12 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:15:39.231 07:16:12 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:39.231 07:16:12 -- common/autotest_common.sh@887 -- # local i 00:15:39.231 07:16:12 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:39.231 07:16:12 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:39.231 07:16:12 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.490 07:16:12 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:39.490 [ 00:15:39.490 { 00:15:39.490 "name": "BaseBdev1", 00:15:39.490 "aliases": [ 00:15:39.490 "c273c8fa-e842-4d33-a2a5-a220e24eccc9" 00:15:39.490 ], 00:15:39.490 "product_name": "Malloc disk", 00:15:39.490 "block_size": 512, 00:15:39.490 "num_blocks": 65536, 00:15:39.490 "uuid": "c273c8fa-e842-4d33-a2a5-a220e24eccc9", 00:15:39.490 "assigned_rate_limits": { 00:15:39.490 "rw_ios_per_sec": 0, 00:15:39.490 "rw_mbytes_per_sec": 0, 00:15:39.490 "r_mbytes_per_sec": 0, 00:15:39.490 
"w_mbytes_per_sec": 0 00:15:39.490 }, 00:15:39.490 "claimed": true, 00:15:39.490 "claim_type": "exclusive_write", 00:15:39.490 "zoned": false, 00:15:39.490 "supported_io_types": { 00:15:39.490 "read": true, 00:15:39.490 "write": true, 00:15:39.490 "unmap": true, 00:15:39.490 "write_zeroes": true, 00:15:39.490 "flush": true, 00:15:39.490 "reset": true, 00:15:39.490 "compare": false, 00:15:39.490 "compare_and_write": false, 00:15:39.490 "abort": true, 00:15:39.490 "nvme_admin": false, 00:15:39.490 "nvme_io": false 00:15:39.490 }, 00:15:39.490 "memory_domains": [ 00:15:39.490 { 00:15:39.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.490 "dma_device_type": 2 00:15:39.490 } 00:15:39.490 ], 00:15:39.490 "driver_specific": {} 00:15:39.490 } 00:15:39.490 ] 00:15:39.749 07:16:13 -- common/autotest_common.sh@893 -- # return 0 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.749 "name": "Existed_Raid", 00:15:39.749 "uuid": "328faa77-526f-4295-955e-8c69f631443a", 00:15:39.749 "strip_size_kb": 64, 00:15:39.749 "state": "configuring", 00:15:39.749 "raid_level": "raid0", 00:15:39.749 "superblock": true, 00:15:39.749 "num_base_bdevs": 3, 00:15:39.749 "num_base_bdevs_discovered": 1, 00:15:39.749 "num_base_bdevs_operational": 3, 00:15:39.749 "base_bdevs_list": [ 00:15:39.749 { 00:15:39.749 "name": "BaseBdev1", 00:15:39.749 "uuid": "c273c8fa-e842-4d33-a2a5-a220e24eccc9", 00:15:39.749 "is_configured": true, 00:15:39.749 "data_offset": 2048, 00:15:39.749 "data_size": 63488 00:15:39.749 }, 00:15:39.749 { 00:15:39.749 "name": "BaseBdev2", 00:15:39.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.749 "is_configured": false, 00:15:39.749 "data_offset": 0, 00:15:39.749 "data_size": 0 00:15:39.749 }, 00:15:39.749 { 00:15:39.749 "name": "BaseBdev3", 00:15:39.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.749 "is_configured": false, 00:15:39.749 "data_offset": 0, 00:15:39.749 "data_size": 0 00:15:39.749 } 00:15:39.749 ] 00:15:39.749 }' 00:15:39.749 07:16:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.749 07:16:13 -- common/autotest_common.sh@10 -- # set +x 00:15:40.685 07:16:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:40.685 [2024-02-13 07:16:14.307684] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.685 [2024-02-13 07:16:14.307924] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:15:40.685 07:16:14 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:15:40.685 07:16:14 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:40.945 07:16:14 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:41.203 BaseBdev1 00:15:41.462 07:16:14 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:15:41.462 07:16:14 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:15:41.462 07:16:14 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:41.462 07:16:14 -- common/autotest_common.sh@887 -- # local i 00:15:41.462 07:16:14 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:41.462 07:16:14 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:41.462 07:16:14 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:41.462 07:16:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:41.720 [ 00:15:41.720 { 00:15:41.720 "name": "BaseBdev1", 00:15:41.720 "aliases": [ 00:15:41.720 "12775755-c1cd-4543-a212-51c26768ba39" 00:15:41.720 ], 00:15:41.720 "product_name": "Malloc disk", 00:15:41.720 "block_size": 512, 00:15:41.720 "num_blocks": 65536, 00:15:41.720 "uuid": "12775755-c1cd-4543-a212-51c26768ba39", 00:15:41.720 "assigned_rate_limits": { 00:15:41.720 "rw_ios_per_sec": 0, 00:15:41.720 "rw_mbytes_per_sec": 0, 00:15:41.720 "r_mbytes_per_sec": 0, 00:15:41.720 "w_mbytes_per_sec": 0 00:15:41.720 }, 00:15:41.720 "claimed": false, 00:15:41.720 "zoned": false, 00:15:41.720 "supported_io_types": { 00:15:41.720 "read": true, 00:15:41.720 "write": true, 00:15:41.720 "unmap": true, 00:15:41.720 "write_zeroes": true, 00:15:41.720 "flush": true, 00:15:41.720 "reset": true, 00:15:41.720 "compare": false, 00:15:41.720 "compare_and_write": false, 00:15:41.720 "abort": true, 00:15:41.720 "nvme_admin": false, 00:15:41.720 "nvme_io": false 00:15:41.720 }, 00:15:41.720 "memory_domains": [ 00:15:41.720 { 00:15:41.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.720 "dma_device_type": 2 00:15:41.720 } 00:15:41.720 ], 00:15:41.720 "driver_specific": {} 00:15:41.720 } 00:15:41.720 ] 00:15:41.720 07:16:15 -- common/autotest_common.sh@893 -- # return 0 00:15:41.720 07:16:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:15:41.979 [2024-02-13 07:16:15.504198] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:41.979 [2024-02-13 07:16:15.506145] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:41.979 [2024-02-13 07:16:15.506358] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:41.979 [2024-02-13 07:16:15.506504] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:41.979 [2024-02-13 07:16:15.506656] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:41.979 
07:16:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.979 07:16:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.237 07:16:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:42.237 "name": "Existed_Raid", 00:15:42.237 "uuid": "a8964faa-0d49-4c41-b539-c262e49e7a46", 00:15:42.237 "strip_size_kb": 64, 00:15:42.237 "state": "configuring", 00:15:42.237 "raid_level": "raid0", 00:15:42.237 "superblock": true, 00:15:42.237 "num_base_bdevs": 3, 00:15:42.237 "num_base_bdevs_discovered": 1, 00:15:42.237 "num_base_bdevs_operational": 3, 00:15:42.237 "base_bdevs_list": [ 00:15:42.237 { 00:15:42.237 "name": "BaseBdev1", 00:15:42.237 "uuid": "12775755-c1cd-4543-a212-51c26768ba39", 00:15:42.237 "is_configured": true, 00:15:42.237 "data_offset": 2048, 00:15:42.237 "data_size": 63488 00:15:42.237 }, 00:15:42.237 { 00:15:42.237 "name": "BaseBdev2", 00:15:42.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.237 "is_configured": false, 00:15:42.237 "data_offset": 0, 00:15:42.237 "data_size": 0 00:15:42.237 }, 00:15:42.237 { 00:15:42.237 "name": "BaseBdev3", 00:15:42.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.238 "is_configured": false, 00:15:42.238 "data_offset": 0, 00:15:42.238 "data_size": 0 00:15:42.238 } 00:15:42.238 ] 00:15:42.238 }' 00:15:42.238 07:16:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:42.238 07:16:15 -- common/autotest_common.sh@10 -- # set +x 00:15:42.852 07:16:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.111 [2024-02-13 07:16:16.714587] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.111 BaseBdev2 00:15:43.111 07:16:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:15:43.111 07:16:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:15:43.111 07:16:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:43.111 07:16:16 -- common/autotest_common.sh@887 -- # local i 00:15:43.111 07:16:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:43.111 07:16:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:43.111 07:16:16 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:43.371 07:16:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.630 [ 00:15:43.630 { 00:15:43.630 "name": "BaseBdev2", 00:15:43.630 "aliases": [ 00:15:43.630 
"3c4c4259-5ec5-41e4-b71e-a8fae3755dd1" 00:15:43.630 ], 00:15:43.630 "product_name": "Malloc disk", 00:15:43.630 "block_size": 512, 00:15:43.630 "num_blocks": 65536, 00:15:43.630 "uuid": "3c4c4259-5ec5-41e4-b71e-a8fae3755dd1", 00:15:43.630 "assigned_rate_limits": { 00:15:43.630 "rw_ios_per_sec": 0, 00:15:43.630 "rw_mbytes_per_sec": 0, 00:15:43.630 "r_mbytes_per_sec": 0, 00:15:43.630 "w_mbytes_per_sec": 0 00:15:43.630 }, 00:15:43.630 "claimed": true, 00:15:43.630 "claim_type": "exclusive_write", 00:15:43.630 "zoned": false, 00:15:43.630 "supported_io_types": { 00:15:43.630 "read": true, 00:15:43.630 "write": true, 00:15:43.630 "unmap": true, 00:15:43.630 "write_zeroes": true, 00:15:43.630 "flush": true, 00:15:43.630 "reset": true, 00:15:43.630 "compare": false, 00:15:43.630 "compare_and_write": false, 00:15:43.630 "abort": true, 00:15:43.630 "nvme_admin": false, 00:15:43.630 "nvme_io": false 00:15:43.630 }, 00:15:43.630 "memory_domains": [ 00:15:43.630 { 00:15:43.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.630 "dma_device_type": 2 00:15:43.630 } 00:15:43.630 ], 00:15:43.630 "driver_specific": {} 00:15:43.630 } 00:15:43.630 ] 00:15:43.630 07:16:17 -- common/autotest_common.sh@893 -- # return 0 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.630 07:16:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.889 07:16:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:43.889 "name": "Existed_Raid", 00:15:43.889 "uuid": "a8964faa-0d49-4c41-b539-c262e49e7a46", 00:15:43.889 "strip_size_kb": 64, 00:15:43.889 "state": "configuring", 00:15:43.889 "raid_level": "raid0", 00:15:43.889 "superblock": true, 00:15:43.889 "num_base_bdevs": 3, 00:15:43.889 "num_base_bdevs_discovered": 2, 00:15:43.889 "num_base_bdevs_operational": 3, 00:15:43.889 "base_bdevs_list": [ 00:15:43.889 { 00:15:43.889 "name": "BaseBdev1", 00:15:43.889 "uuid": "12775755-c1cd-4543-a212-51c26768ba39", 00:15:43.889 "is_configured": true, 00:15:43.889 "data_offset": 2048, 00:15:43.889 "data_size": 63488 00:15:43.889 }, 00:15:43.889 { 00:15:43.889 "name": "BaseBdev2", 00:15:43.889 "uuid": "3c4c4259-5ec5-41e4-b71e-a8fae3755dd1", 00:15:43.889 "is_configured": true, 00:15:43.889 "data_offset": 2048, 00:15:43.889 "data_size": 63488 00:15:43.889 }, 00:15:43.889 { 00:15:43.889 "name": "BaseBdev3", 00:15:43.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.889 "is_configured": false, 00:15:43.889 "data_offset": 0, 00:15:43.889 "data_size": 0 00:15:43.889 
} 00:15:43.889 ] 00:15:43.889 }' 00:15:43.889 07:16:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:43.889 07:16:17 -- common/autotest_common.sh@10 -- # set +x 00:15:44.456 07:16:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.715 [2024-02-13 07:16:18.291328] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.715 [2024-02-13 07:16:18.291833] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:15:44.715 [2024-02-13 07:16:18.292001] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:44.715 BaseBdev3 00:15:44.715 [2024-02-13 07:16:18.292155] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:15:44.715 [2024-02-13 07:16:18.292509] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:15:44.715 [2024-02-13 07:16:18.292676] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:15:44.715 [2024-02-13 07:16:18.292959] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.715 07:16:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:15:44.715 07:16:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:15:44.715 07:16:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:44.715 07:16:18 -- common/autotest_common.sh@887 -- # local i 00:15:44.715 07:16:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:44.715 07:16:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:44.715 07:16:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.973 07:16:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:45.232 [ 00:15:45.232 { 00:15:45.232 "name": "BaseBdev3", 00:15:45.232 "aliases": [ 00:15:45.232 "94c98de5-c5c1-437b-9113-6cb23098dfe8" 00:15:45.232 ], 00:15:45.232 "product_name": "Malloc disk", 00:15:45.232 "block_size": 512, 00:15:45.232 "num_blocks": 65536, 00:15:45.232 "uuid": "94c98de5-c5c1-437b-9113-6cb23098dfe8", 00:15:45.232 "assigned_rate_limits": { 00:15:45.232 "rw_ios_per_sec": 0, 00:15:45.232 "rw_mbytes_per_sec": 0, 00:15:45.232 "r_mbytes_per_sec": 0, 00:15:45.232 "w_mbytes_per_sec": 0 00:15:45.232 }, 00:15:45.232 "claimed": true, 00:15:45.232 "claim_type": "exclusive_write", 00:15:45.232 "zoned": false, 00:15:45.232 "supported_io_types": { 00:15:45.232 "read": true, 00:15:45.232 "write": true, 00:15:45.232 "unmap": true, 00:15:45.232 "write_zeroes": true, 00:15:45.232 "flush": true, 00:15:45.232 "reset": true, 00:15:45.232 "compare": false, 00:15:45.232 "compare_and_write": false, 00:15:45.232 "abort": true, 00:15:45.232 "nvme_admin": false, 00:15:45.232 "nvme_io": false 00:15:45.232 }, 00:15:45.232 "memory_domains": [ 00:15:45.232 { 00:15:45.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.232 "dma_device_type": 2 00:15:45.232 } 00:15:45.232 ], 00:15:45.232 "driver_specific": {} 00:15:45.232 } 00:15:45.232 ] 00:15:45.232 07:16:18 -- common/autotest_common.sh@893 -- # return 0 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@259 -- # 
verify_raid_bdev_state Existed_Raid online raid0 64 3 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.232 07:16:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.491 07:16:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:45.491 "name": "Existed_Raid", 00:15:45.491 "uuid": "a8964faa-0d49-4c41-b539-c262e49e7a46", 00:15:45.491 "strip_size_kb": 64, 00:15:45.491 "state": "online", 00:15:45.491 "raid_level": "raid0", 00:15:45.491 "superblock": true, 00:15:45.491 "num_base_bdevs": 3, 00:15:45.491 "num_base_bdevs_discovered": 3, 00:15:45.491 "num_base_bdevs_operational": 3, 00:15:45.491 "base_bdevs_list": [ 00:15:45.491 { 00:15:45.491 "name": "BaseBdev1", 00:15:45.491 "uuid": "12775755-c1cd-4543-a212-51c26768ba39", 00:15:45.491 "is_configured": true, 00:15:45.491 "data_offset": 2048, 00:15:45.491 "data_size": 63488 00:15:45.491 }, 00:15:45.491 { 00:15:45.491 "name": "BaseBdev2", 00:15:45.491 "uuid": "3c4c4259-5ec5-41e4-b71e-a8fae3755dd1", 00:15:45.491 "is_configured": true, 00:15:45.491 "data_offset": 2048, 00:15:45.491 "data_size": 63488 00:15:45.491 }, 00:15:45.491 { 00:15:45.491 "name": "BaseBdev3", 00:15:45.491 "uuid": "94c98de5-c5c1-437b-9113-6cb23098dfe8", 00:15:45.491 "is_configured": true, 00:15:45.491 "data_offset": 2048, 00:15:45.491 "data_size": 63488 00:15:45.491 } 00:15:45.491 ] 00:15:45.491 }' 00:15:45.491 07:16:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:45.491 07:16:18 -- common/autotest_common.sh@10 -- # set +x 00:15:46.057 07:16:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:46.316 [2024-02-13 07:16:19.765538] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:46.316 [2024-02-13 07:16:19.765739] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.316 [2024-02-13 07:16:19.765919] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@120 -- # 
local strip_size=64 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.316 07:16:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.575 07:16:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.575 "name": "Existed_Raid", 00:15:46.575 "uuid": "a8964faa-0d49-4c41-b539-c262e49e7a46", 00:15:46.575 "strip_size_kb": 64, 00:15:46.575 "state": "offline", 00:15:46.575 "raid_level": "raid0", 00:15:46.575 "superblock": true, 00:15:46.575 "num_base_bdevs": 3, 00:15:46.575 "num_base_bdevs_discovered": 2, 00:15:46.575 "num_base_bdevs_operational": 2, 00:15:46.575 "base_bdevs_list": [ 00:15:46.575 { 00:15:46.575 "name": null, 00:15:46.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.575 "is_configured": false, 00:15:46.575 "data_offset": 2048, 00:15:46.575 "data_size": 63488 00:15:46.575 }, 00:15:46.575 { 00:15:46.575 "name": "BaseBdev2", 00:15:46.575 "uuid": "3c4c4259-5ec5-41e4-b71e-a8fae3755dd1", 00:15:46.575 "is_configured": true, 00:15:46.575 "data_offset": 2048, 00:15:46.575 "data_size": 63488 00:15:46.575 }, 00:15:46.575 { 00:15:46.575 "name": "BaseBdev3", 00:15:46.575 "uuid": "94c98de5-c5c1-437b-9113-6cb23098dfe8", 00:15:46.575 "is_configured": true, 00:15:46.575 "data_offset": 2048, 00:15:46.575 "data_size": 63488 00:15:46.575 } 00:15:46.575 ] 00:15:46.575 }' 00:15:46.575 07:16:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.575 07:16:20 -- common/autotest_common.sh@10 -- # set +x 00:15:47.511 07:16:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:15:47.511 07:16:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:47.511 07:16:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.511 07:16:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:47.511 07:16:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:47.511 07:16:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:47.511 07:16:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:47.770 [2024-02-13 07:16:21.320153] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.770 07:16:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:47.770 07:16:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:47.770 07:16:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.770 07:16:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:15:48.029 07:16:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:15:48.029 07:16:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:48.029 07:16:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:48.288 [2024-02-13 07:16:21.774164] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:48.288 [2024-02-13 
07:16:21.774401] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:15:48.288 07:16:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:15:48.288 07:16:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:15:48.288 07:16:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.288 07:16:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:15:48.546 07:16:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:15:48.546 07:16:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:15:48.546 07:16:22 -- bdev/bdev_raid.sh@287 -- # killprocess 119632 00:15:48.546 07:16:22 -- common/autotest_common.sh@924 -- # '[' -z 119632 ']' 00:15:48.546 07:16:22 -- common/autotest_common.sh@928 -- # kill -0 119632 00:15:48.546 07:16:22 -- common/autotest_common.sh@929 -- # uname 00:15:48.546 07:16:22 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:48.546 07:16:22 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 119632 00:15:48.546 killing process with pid 119632 00:15:48.546 07:16:22 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:48.546 07:16:22 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:48.546 07:16:22 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 119632' 00:15:48.546 07:16:22 -- common/autotest_common.sh@943 -- # kill 119632 00:15:48.546 07:16:22 -- common/autotest_common.sh@948 -- # wait 119632 00:15:48.546 [2024-02-13 07:16:22.078034] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.546 [2024-02-13 07:16:22.078202] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.482 ************************************ 00:15:49.482 END TEST raid_state_function_test_sb 00:15:49.482 ************************************ 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:15:49.482 00:15:49.482 real 0m13.199s 00:15:49.482 user 0m23.594s 00:15:49.482 sys 0m1.460s 00:15:49.482 07:16:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:49.482 07:16:23 -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:15:49.482 07:16:23 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:15:49.482 07:16:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:15:49.482 07:16:23 -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 ************************************ 00:15:49.482 START TEST raid_superblock_test 00:15:49.482 ************************************ 00:15:49.482 07:16:23 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid0 3 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:15:49.482 07:16:23 -- 
bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@357 -- # raid_pid=120040 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@358 -- # waitforlisten 120040 /var/tmp/spdk-raid.sock 00:15:49.482 07:16:23 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:49.482 07:16:23 -- common/autotest_common.sh@817 -- # '[' -z 120040 ']' 00:15:49.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:49.482 07:16:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:49.482 07:16:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:49.482 07:16:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:49.482 07:16:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:49.482 07:16:23 -- common/autotest_common.sh@10 -- # set +x 00:15:49.741 [2024-02-13 07:16:23.179323] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:15:49.741 [2024-02-13 07:16:23.179517] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid120040 ] 00:15:49.741 [2024-02-13 07:16:23.348227] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.999 [2024-02-13 07:16:23.531142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.258 [2024-02-13 07:16:23.709934] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.516 07:16:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:50.516 07:16:24 -- common/autotest_common.sh@850 -- # return 0 00:15:50.516 07:16:24 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:15:50.516 07:16:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:50.516 07:16:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:15:50.516 07:16:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:15:50.516 07:16:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:50.516 07:16:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:50.516 07:16:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:50.516 07:16:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:50.516 07:16:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:50.775 malloc1 00:15:50.775 07:16:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:51.034 [2024-02-13 07:16:24.491379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:51.034 [2024-02-13 07:16:24.491497] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.034 
[2024-02-13 07:16:24.491530] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:15:51.034 [2024-02-13 07:16:24.491576] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.034 [2024-02-13 07:16:24.493687] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.034 [2024-02-13 07:16:24.493737] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:51.034 pt1 00:15:51.034 07:16:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:51.034 07:16:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:51.034 07:16:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:15:51.034 07:16:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:15:51.034 07:16:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:51.034 07:16:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:51.034 07:16:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:51.034 07:16:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:51.034 07:16:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:51.293 malloc2 00:15:51.293 07:16:24 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.293 [2024-02-13 07:16:24.972454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.293 [2024-02-13 07:16:24.972564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.293 [2024-02-13 07:16:24.972608] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:15:51.293 [2024-02-13 07:16:24.972662] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.293 [2024-02-13 07:16:24.974738] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.293 [2024-02-13 07:16:24.974803] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.293 pt2 00:15:51.552 07:16:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:51.552 07:16:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:51.552 07:16:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:15:51.552 07:16:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:15:51.552 07:16:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:51.552 07:16:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:51.552 07:16:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:15:51.552 07:16:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:51.552 07:16:24 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:51.552 malloc3 00:15:51.552 07:16:25 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:51.811 [2024-02-13 07:16:25.385747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:51.811 [2024-02-13 07:16:25.385860] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.811 
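
# Note: raid_superblock_test wraps each malloc bdev in a passthru bdev with a
# fixed UUID (pt1..pt3), which appears to give the raid superblock stable base
# bdev identities to record and re-examine. A condensed sketch of the
# construction, reusing the RPC calls visible in the trace:
#
#   RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
#   for i in 1 2 3; do
#       $RPC bdev_malloc_create 32 512 -b malloc$i   # 32 MiB, 512 B blocks -> 65536 blocks
#       $RPC bdev_passthru_create -b malloc$i -p pt$i \
#           -u 00000000-0000-0000-0000-00000000000$i
#   done
#   $RPC bdev_raid_create -z 64 -s -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1
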
[2024-02-13 07:16:25.385902] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:51.811 [2024-02-13 07:16:25.385949] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.811 [2024-02-13 07:16:25.388002] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.811 [2024-02-13 07:16:25.388068] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:51.811 pt3 00:15:51.811 07:16:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:15:51.811 07:16:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:15:51.811 07:16:25 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:15:52.070 [2024-02-13 07:16:25.573803] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:52.070 [2024-02-13 07:16:25.575506] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:52.070 [2024-02-13 07:16:25.575593] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:52.070 [2024-02-13 07:16:25.575818] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:15:52.070 [2024-02-13 07:16:25.575834] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:52.070 [2024-02-13 07:16:25.575968] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:15:52.070 [2024-02-13 07:16:25.576349] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:15:52.070 [2024-02-13 07:16:25.576374] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:15:52.070 [2024-02-13 07:16:25.576547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.070 07:16:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.329 07:16:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:52.329 "name": "raid_bdev1", 00:15:52.329 "uuid": "60be44ae-f747-4ceb-a96a-85d4b5481a8a", 00:15:52.329 "strip_size_kb": 64, 00:15:52.329 "state": "online", 00:15:52.329 "raid_level": "raid0", 00:15:52.329 "superblock": true, 00:15:52.329 "num_base_bdevs": 3, 00:15:52.329 "num_base_bdevs_discovered": 3, 00:15:52.329 "num_base_bdevs_operational": 3, 00:15:52.329 "base_bdevs_list": [ 00:15:52.329 { 00:15:52.329 "name": "pt1", 00:15:52.329 "uuid": 
"3b5dce50-b54f-5621-a289-709b1571ae53", 00:15:52.329 "is_configured": true, 00:15:52.329 "data_offset": 2048, 00:15:52.329 "data_size": 63488 00:15:52.329 }, 00:15:52.330 { 00:15:52.330 "name": "pt2", 00:15:52.330 "uuid": "02221f35-cf2b-58e9-b6bc-101437b73325", 00:15:52.330 "is_configured": true, 00:15:52.330 "data_offset": 2048, 00:15:52.330 "data_size": 63488 00:15:52.330 }, 00:15:52.330 { 00:15:52.330 "name": "pt3", 00:15:52.330 "uuid": "27a87623-d249-5770-a6b2-d757c879ee16", 00:15:52.330 "is_configured": true, 00:15:52.330 "data_offset": 2048, 00:15:52.330 "data_size": 63488 00:15:52.330 } 00:15:52.330 ] 00:15:52.330 }' 00:15:52.330 07:16:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:52.330 07:16:25 -- common/autotest_common.sh@10 -- # set +x 00:15:52.897 07:16:26 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:52.897 07:16:26 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:15:53.156 [2024-02-13 07:16:26.674138] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.156 07:16:26 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=60be44ae-f747-4ceb-a96a-85d4b5481a8a 00:15:53.156 07:16:26 -- bdev/bdev_raid.sh@380 -- # '[' -z 60be44ae-f747-4ceb-a96a-85d4b5481a8a ']' 00:15:53.156 07:16:26 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:53.415 [2024-02-13 07:16:26.914058] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.415 [2024-02-13 07:16:26.914095] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.415 [2024-02-13 07:16:26.914209] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.415 [2024-02-13 07:16:26.914321] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.415 [2024-02-13 07:16:26.914365] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:15:53.415 07:16:26 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.415 07:16:26 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:15:53.674 07:16:27 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:15:53.674 07:16:27 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:15:53.674 07:16:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:53.674 07:16:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:53.674 07:16:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:53.674 07:16:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:53.933 07:16:27 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:15:53.933 07:16:27 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:54.191 07:16:27 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:54.191 07:16:27 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:54.451 07:16:27 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:15:54.451 07:16:27 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:54.451 07:16:27 -- common/autotest_common.sh@638 -- # local es=0 00:15:54.451 07:16:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:54.451 07:16:27 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.451 07:16:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:54.451 07:16:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.451 07:16:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:54.451 07:16:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.451 07:16:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:54.451 07:16:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:54.451 07:16:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:54.451 07:16:27 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:15:54.710 [2024-02-13 07:16:28.198294] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:54.710 [2024-02-13 07:16:28.200170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:54.710 [2024-02-13 07:16:28.200241] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:54.710 [2024-02-13 07:16:28.200298] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:15:54.710 [2024-02-13 07:16:28.200391] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:15:54.710 [2024-02-13 07:16:28.200461] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:15:54.710 [2024-02-13 07:16:28.200511] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:54.710 [2024-02-13 07:16:28.200539] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:15:54.710 request: 00:15:54.710 { 00:15:54.710 "name": "raid_bdev1", 00:15:54.710 "raid_level": "raid0", 00:15:54.710 "base_bdevs": [ 00:15:54.710 "malloc1", 00:15:54.710 "malloc2", 00:15:54.710 "malloc3" 00:15:54.710 ], 00:15:54.710 "superblock": false, 00:15:54.710 "strip_size_kb": 64, 00:15:54.710 "method": "bdev_raid_create", 00:15:54.710 "req_id": 1 00:15:54.710 } 00:15:54.710 Got JSON-RPC error response 00:15:54.710 response: 00:15:54.710 { 00:15:54.710 "code": -17, 00:15:54.710 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:54.710 } 00:15:54.710 07:16:28 -- common/autotest_common.sh@641 -- # es=1 00:15:54.710 07:16:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:54.710 07:16:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:54.710 07:16:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:54.710 07:16:28 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.710 07:16:28 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:15:54.969 07:16:28 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:15:54.969 07:16:28 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:54.970 [2024-02-13 07:16:28.630289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:54.970 [2024-02-13 07:16:28.630398] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.970 [2024-02-13 07:16:28.630439] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:54.970 [2024-02-13 07:16:28.630458] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.970 [2024-02-13 07:16:28.632581] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.970 [2024-02-13 07:16:28.632645] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:54.970 [2024-02-13 07:16:28.632777] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:15:54.970 [2024-02-13 07:16:28.632853] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:54.970 pt1 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.970 07:16:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.229 07:16:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:55.229 "name": "raid_bdev1", 00:15:55.229 "uuid": "60be44ae-f747-4ceb-a96a-85d4b5481a8a", 00:15:55.229 "strip_size_kb": 64, 00:15:55.229 "state": "configuring", 00:15:55.229 "raid_level": "raid0", 00:15:55.229 "superblock": true, 00:15:55.229 "num_base_bdevs": 3, 00:15:55.229 "num_base_bdevs_discovered": 1, 00:15:55.229 "num_base_bdevs_operational": 3, 00:15:55.229 "base_bdevs_list": [ 00:15:55.229 { 00:15:55.229 "name": "pt1", 00:15:55.229 "uuid": "3b5dce50-b54f-5621-a289-709b1571ae53", 00:15:55.229 "is_configured": true, 00:15:55.229 "data_offset": 2048, 00:15:55.229 "data_size": 63488 00:15:55.229 }, 00:15:55.229 { 00:15:55.229 "name": null, 00:15:55.229 "uuid": "02221f35-cf2b-58e9-b6bc-101437b73325", 00:15:55.229 "is_configured": false, 00:15:55.229 "data_offset": 2048, 00:15:55.229 "data_size": 63488 00:15:55.229 }, 00:15:55.229 { 00:15:55.229 "name": null, 00:15:55.229 "uuid": "27a87623-d249-5770-a6b2-d757c879ee16", 00:15:55.229 "is_configured": false, 00:15:55.229 
"data_offset": 2048, 00:15:55.229 "data_size": 63488 00:15:55.229 } 00:15:55.229 ] 00:15:55.229 }' 00:15:55.229 07:16:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:55.229 07:16:28 -- common/autotest_common.sh@10 -- # set +x 00:15:55.797 07:16:29 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:15:55.797 07:16:29 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:56.055 [2024-02-13 07:16:29.718517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:56.056 [2024-02-13 07:16:29.718636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.056 [2024-02-13 07:16:29.718681] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:56.056 [2024-02-13 07:16:29.718704] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.056 [2024-02-13 07:16:29.719242] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.056 [2024-02-13 07:16:29.719282] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:56.056 [2024-02-13 07:16:29.719397] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:56.056 [2024-02-13 07:16:29.719442] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:56.056 pt2 00:15:56.056 07:16:29 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:56.314 [2024-02-13 07:16:29.910556] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.314 07:16:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.573 07:16:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:56.573 "name": "raid_bdev1", 00:15:56.573 "uuid": "60be44ae-f747-4ceb-a96a-85d4b5481a8a", 00:15:56.573 "strip_size_kb": 64, 00:15:56.573 "state": "configuring", 00:15:56.573 "raid_level": "raid0", 00:15:56.573 "superblock": true, 00:15:56.573 "num_base_bdevs": 3, 00:15:56.573 "num_base_bdevs_discovered": 1, 00:15:56.573 "num_base_bdevs_operational": 3, 00:15:56.573 "base_bdevs_list": [ 00:15:56.573 { 00:15:56.573 "name": "pt1", 00:15:56.573 "uuid": "3b5dce50-b54f-5621-a289-709b1571ae53", 00:15:56.573 "is_configured": true, 00:15:56.573 "data_offset": 2048, 00:15:56.573 "data_size": 63488 00:15:56.573 }, 00:15:56.573 { 00:15:56.573 "name": null, 00:15:56.573 "uuid": 
"02221f35-cf2b-58e9-b6bc-101437b73325", 00:15:56.573 "is_configured": false, 00:15:56.573 "data_offset": 2048, 00:15:56.573 "data_size": 63488 00:15:56.573 }, 00:15:56.573 { 00:15:56.573 "name": null, 00:15:56.574 "uuid": "27a87623-d249-5770-a6b2-d757c879ee16", 00:15:56.574 "is_configured": false, 00:15:56.574 "data_offset": 2048, 00:15:56.574 "data_size": 63488 00:15:56.574 } 00:15:56.574 ] 00:15:56.574 }' 00:15:56.574 07:16:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:56.574 07:16:30 -- common/autotest_common.sh@10 -- # set +x 00:15:57.510 07:16:30 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:15:57.510 07:16:30 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:57.510 07:16:30 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:57.510 [2024-02-13 07:16:31.086802] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:57.510 [2024-02-13 07:16:31.086908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.510 [2024-02-13 07:16:31.086949] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:57.510 [2024-02-13 07:16:31.086988] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.511 [2024-02-13 07:16:31.087491] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.511 [2024-02-13 07:16:31.087533] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:57.511 [2024-02-13 07:16:31.087653] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:15:57.511 [2024-02-13 07:16:31.087681] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:57.511 pt2 00:15:57.511 07:16:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:57.511 07:16:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:57.511 07:16:31 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:57.769 [2024-02-13 07:16:31.330835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:57.769 [2024-02-13 07:16:31.330908] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.769 [2024-02-13 07:16:31.330939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:57.769 [2024-02-13 07:16:31.330961] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.769 [2024-02-13 07:16:31.331305] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.769 [2024-02-13 07:16:31.331347] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:57.769 [2024-02-13 07:16:31.331440] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:15:57.769 [2024-02-13 07:16:31.331464] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:57.769 [2024-02-13 07:16:31.331570] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:15:57.769 [2024-02-13 07:16:31.331582] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:57.769 [2024-02-13 07:16:31.331698] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005c70 00:15:57.769 [2024-02-13 07:16:31.332005] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:15:57.769 [2024-02-13 07:16:31.332043] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:15:57.769 [2024-02-13 07:16:31.332166] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.769 pt3 00:15:57.769 07:16:31 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:57.770 07:16:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.029 07:16:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:58.029 "name": "raid_bdev1", 00:15:58.029 "uuid": "60be44ae-f747-4ceb-a96a-85d4b5481a8a", 00:15:58.029 "strip_size_kb": 64, 00:15:58.029 "state": "online", 00:15:58.029 "raid_level": "raid0", 00:15:58.029 "superblock": true, 00:15:58.029 "num_base_bdevs": 3, 00:15:58.029 "num_base_bdevs_discovered": 3, 00:15:58.029 "num_base_bdevs_operational": 3, 00:15:58.029 "base_bdevs_list": [ 00:15:58.029 { 00:15:58.029 "name": "pt1", 00:15:58.029 "uuid": "3b5dce50-b54f-5621-a289-709b1571ae53", 00:15:58.029 "is_configured": true, 00:15:58.029 "data_offset": 2048, 00:15:58.029 "data_size": 63488 00:15:58.029 }, 00:15:58.029 { 00:15:58.029 "name": "pt2", 00:15:58.029 "uuid": "02221f35-cf2b-58e9-b6bc-101437b73325", 00:15:58.029 "is_configured": true, 00:15:58.029 "data_offset": 2048, 00:15:58.029 "data_size": 63488 00:15:58.029 }, 00:15:58.029 { 00:15:58.029 "name": "pt3", 00:15:58.029 "uuid": "27a87623-d249-5770-a6b2-d757c879ee16", 00:15:58.029 "is_configured": true, 00:15:58.029 "data_offset": 2048, 00:15:58.029 "data_size": 63488 00:15:58.029 } 00:15:58.029 ] 00:15:58.029 }' 00:15:58.029 07:16:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:58.029 07:16:31 -- common/autotest_common.sh@10 -- # set +x 00:15:58.597 07:16:32 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:58.597 07:16:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:15:58.855 [2024-02-13 07:16:32.471419] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.855 07:16:32 -- bdev/bdev_raid.sh@430 -- # '[' 60be44ae-f747-4ceb-a96a-85d4b5481a8a '!=' 60be44ae-f747-4ceb-a96a-85d4b5481a8a ']' 00:15:58.855 07:16:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:15:58.855 07:16:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:15:58.855 
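
Note: by this point the array has been torn down once, has rejected a re-create directly from the bare malloc bdevs (the expected -17 "File exists" earlier, since those bdevs still carry the raid superblock), and has been rebuilt on fresh passthru bdevs. Every verify_raid_bdev_state block in between reduces to the same query; a hedged equivalent, assuming jq is available:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($RPC bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")')
    state=$(jq -r .state <<<"$info")
    found=$(jq -r .num_base_bdevs_discovered <<<"$info")
    if [ "$state" != online ] || [ "$found" -ne 3 ]; then
        echo "raid_bdev1: state=$state, discovered=$found/3" >&2
        exit 1
    fi
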
07:16:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:15:58.855 07:16:32 -- bdev/bdev_raid.sh@511 -- # killprocess 120040 00:15:58.855 07:16:32 -- common/autotest_common.sh@924 -- # '[' -z 120040 ']' 00:15:58.855 07:16:32 -- common/autotest_common.sh@928 -- # kill -0 120040 00:15:58.855 07:16:32 -- common/autotest_common.sh@929 -- # uname 00:15:58.855 07:16:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:15:58.855 07:16:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 120040 00:15:58.855 07:16:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:15:58.855 07:16:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:15:58.855 killing process with pid 120040 00:15:58.855 07:16:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 120040' 00:15:58.855 07:16:32 -- common/autotest_common.sh@943 -- # kill 120040 00:15:58.855 07:16:32 -- common/autotest_common.sh@948 -- # wait 120040 00:15:58.855 [2024-02-13 07:16:32.509560] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.855 [2024-02-13 07:16:32.509655] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.855 [2024-02-13 07:16:32.509757] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.855 [2024-02-13 07:16:32.509768] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:15:59.114 [2024-02-13 07:16:32.720850] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.049 ************************************ 00:16:00.049 END TEST raid_superblock_test 00:16:00.049 ************************************ 00:16:00.049 07:16:33 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:00.049 00:16:00.049 real 0m10.583s 00:16:00.049 user 0m18.660s 00:16:00.049 sys 0m1.173s 00:16:00.049 07:16:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:00.049 07:16:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.049 07:16:33 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:00.049 07:16:33 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:16:00.049 07:16:33 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:16:00.049 07:16:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:00.049 07:16:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.308 ************************************ 00:16:00.308 START TEST raid_state_function_test 00:16:00.308 ************************************ 00:16:00.308 07:16:33 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 3 false 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # echo 
BaseBdev2 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:00.308 07:16:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=120372 00:16:00.309 Process raid pid: 120372 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120372' 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120372 /var/tmp/spdk-raid.sock 00:16:00.309 07:16:33 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:00.309 07:16:33 -- common/autotest_common.sh@817 -- # '[' -z 120372 ']' 00:16:00.309 07:16:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:00.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:00.309 07:16:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:00.309 07:16:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:00.309 07:16:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:00.309 07:16:33 -- common/autotest_common.sh@10 -- # set +x 00:16:00.309 [2024-02-13 07:16:33.813313] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
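
Note the ordering raid_state_function_test is about to exercise: the raid bdev is created before any of its base bdevs exist (the "doesn't exist now" messages below), so Existed_Raid is registered in the "configuring" state and only goes "online" once all three bases appear. A sketch of that sequence, reusing the same hypothetical socket as above:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Raid first: every base is missing, so state stays "configuring".
    $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # Each malloc bdev is claimed as it appears; discovered goes 0 -> 3.
    for i in 1 2 3; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev$i
    done
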
00:16:00.309 [2024-02-13 07:16:33.813514] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:00.309 [2024-02-13 07:16:33.980787] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.567 [2024-02-13 07:16:34.155507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.826 [2024-02-13 07:16:34.336996] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.084 07:16:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:01.084 07:16:34 -- common/autotest_common.sh@850 -- # return 0 00:16:01.084 07:16:34 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:01.343 [2024-02-13 07:16:34.997239] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:01.343 [2024-02-13 07:16:34.997335] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:01.343 [2024-02-13 07:16:34.997349] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.343 [2024-02-13 07:16:34.997368] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.343 [2024-02-13 07:16:34.997375] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.343 [2024-02-13 07:16:34.997417] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:01.343 07:16:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.601 07:16:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:01.601 "name": "Existed_Raid", 00:16:01.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.601 "strip_size_kb": 64, 00:16:01.601 "state": "configuring", 00:16:01.601 "raid_level": "concat", 00:16:01.601 "superblock": false, 00:16:01.601 "num_base_bdevs": 3, 00:16:01.601 "num_base_bdevs_discovered": 0, 00:16:01.601 "num_base_bdevs_operational": 3, 00:16:01.601 "base_bdevs_list": [ 00:16:01.601 { 00:16:01.601 "name": "BaseBdev1", 00:16:01.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.601 "is_configured": false, 00:16:01.601 "data_offset": 0, 00:16:01.601 "data_size": 0 00:16:01.601 }, 00:16:01.601 { 00:16:01.601 "name": "BaseBdev2", 00:16:01.601 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:01.601 "is_configured": false, 00:16:01.601 "data_offset": 0, 00:16:01.601 "data_size": 0 00:16:01.601 }, 00:16:01.601 { 00:16:01.601 "name": "BaseBdev3", 00:16:01.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.601 "is_configured": false, 00:16:01.601 "data_offset": 0, 00:16:01.601 "data_size": 0 00:16:01.601 } 00:16:01.601 ] 00:16:01.601 }' 00:16:01.601 07:16:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:01.601 07:16:35 -- common/autotest_common.sh@10 -- # set +x 00:16:02.536 07:16:35 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:02.536 [2024-02-13 07:16:36.205362] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.536 [2024-02-13 07:16:36.205394] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:02.536 07:16:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:02.794 [2024-02-13 07:16:36.469424] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.794 [2024-02-13 07:16:36.469478] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.794 [2024-02-13 07:16:36.469505] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.794 [2024-02-13 07:16:36.469531] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.794 [2024-02-13 07:16:36.469539] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.794 [2024-02-13 07:16:36.469563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.794 07:16:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:03.052 [2024-02-13 07:16:36.731380] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.052 BaseBdev1 00:16:03.310 07:16:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:03.310 07:16:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:03.310 07:16:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:03.310 07:16:36 -- common/autotest_common.sh@887 -- # local i 00:16:03.310 07:16:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:03.310 07:16:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:03.310 07:16:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.310 07:16:36 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:03.567 [ 00:16:03.567 { 00:16:03.568 "name": "BaseBdev1", 00:16:03.568 "aliases": [ 00:16:03.568 "55f02dae-4d11-48d2-bcc4-0ee87f84c572" 00:16:03.568 ], 00:16:03.568 "product_name": "Malloc disk", 00:16:03.568 "block_size": 512, 00:16:03.568 "num_blocks": 65536, 00:16:03.568 "uuid": "55f02dae-4d11-48d2-bcc4-0ee87f84c572", 00:16:03.568 "assigned_rate_limits": { 00:16:03.568 "rw_ios_per_sec": 0, 00:16:03.568 "rw_mbytes_per_sec": 0, 00:16:03.568 "r_mbytes_per_sec": 0, 00:16:03.568 "w_mbytes_per_sec": 
0 00:16:03.568 }, 00:16:03.568 "claimed": true, 00:16:03.568 "claim_type": "exclusive_write", 00:16:03.568 "zoned": false, 00:16:03.568 "supported_io_types": { 00:16:03.568 "read": true, 00:16:03.568 "write": true, 00:16:03.568 "unmap": true, 00:16:03.568 "write_zeroes": true, 00:16:03.568 "flush": true, 00:16:03.568 "reset": true, 00:16:03.568 "compare": false, 00:16:03.568 "compare_and_write": false, 00:16:03.568 "abort": true, 00:16:03.568 "nvme_admin": false, 00:16:03.568 "nvme_io": false 00:16:03.568 }, 00:16:03.568 "memory_domains": [ 00:16:03.568 { 00:16:03.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.568 "dma_device_type": 2 00:16:03.568 } 00:16:03.568 ], 00:16:03.568 "driver_specific": {} 00:16:03.568 } 00:16:03.568 ] 00:16:03.568 07:16:37 -- common/autotest_common.sh@893 -- # return 0 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.568 07:16:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.826 07:16:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:03.826 "name": "Existed_Raid", 00:16:03.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.826 "strip_size_kb": 64, 00:16:03.826 "state": "configuring", 00:16:03.826 "raid_level": "concat", 00:16:03.826 "superblock": false, 00:16:03.826 "num_base_bdevs": 3, 00:16:03.826 "num_base_bdevs_discovered": 1, 00:16:03.826 "num_base_bdevs_operational": 3, 00:16:03.826 "base_bdevs_list": [ 00:16:03.826 { 00:16:03.826 "name": "BaseBdev1", 00:16:03.826 "uuid": "55f02dae-4d11-48d2-bcc4-0ee87f84c572", 00:16:03.826 "is_configured": true, 00:16:03.826 "data_offset": 0, 00:16:03.826 "data_size": 65536 00:16:03.826 }, 00:16:03.826 { 00:16:03.826 "name": "BaseBdev2", 00:16:03.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.826 "is_configured": false, 00:16:03.826 "data_offset": 0, 00:16:03.826 "data_size": 0 00:16:03.826 }, 00:16:03.826 { 00:16:03.826 "name": "BaseBdev3", 00:16:03.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.826 "is_configured": false, 00:16:03.826 "data_offset": 0, 00:16:03.826 "data_size": 0 00:16:03.826 } 00:16:03.826 ] 00:16:03.826 }' 00:16:03.826 07:16:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:03.826 07:16:37 -- common/autotest_common.sh@10 -- # set +x 00:16:04.392 07:16:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:04.657 [2024-02-13 07:16:38.243620] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:04.657 [2024-02-13 07:16:38.243666] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:16:04.657 07:16:38 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:04.657 07:16:38 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:04.947 [2024-02-13 07:16:38.495732] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.947 [2024-02-13 07:16:38.497660] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:04.947 [2024-02-13 07:16:38.497721] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:04.947 [2024-02-13 07:16:38.497781] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:04.947 [2024-02-13 07:16:38.497812] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.947 07:16:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.206 07:16:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:05.206 "name": "Existed_Raid", 00:16:05.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.206 "strip_size_kb": 64, 00:16:05.206 "state": "configuring", 00:16:05.206 "raid_level": "concat", 00:16:05.206 "superblock": false, 00:16:05.206 "num_base_bdevs": 3, 00:16:05.206 "num_base_bdevs_discovered": 1, 00:16:05.206 "num_base_bdevs_operational": 3, 00:16:05.206 "base_bdevs_list": [ 00:16:05.206 { 00:16:05.206 "name": "BaseBdev1", 00:16:05.206 "uuid": "55f02dae-4d11-48d2-bcc4-0ee87f84c572", 00:16:05.206 "is_configured": true, 00:16:05.206 "data_offset": 0, 00:16:05.206 "data_size": 65536 00:16:05.206 }, 00:16:05.206 { 00:16:05.206 "name": "BaseBdev2", 00:16:05.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.206 "is_configured": false, 00:16:05.206 "data_offset": 0, 00:16:05.206 "data_size": 0 00:16:05.206 }, 00:16:05.206 { 00:16:05.206 "name": "BaseBdev3", 00:16:05.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.206 "is_configured": false, 00:16:05.206 "data_offset": 0, 00:16:05.206 "data_size": 0 00:16:05.206 } 00:16:05.206 ] 00:16:05.206 }' 00:16:05.206 07:16:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:05.206 07:16:38 -- common/autotest_common.sh@10 -- # set +x 00:16:05.773 07:16:39 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:06.036 [2024-02-13 07:16:39.690960] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.036 BaseBdev2 00:16:06.036 07:16:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:06.036 07:16:39 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:06.036 07:16:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:06.036 07:16:39 -- common/autotest_common.sh@887 -- # local i 00:16:06.036 07:16:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:06.036 07:16:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:06.036 07:16:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:06.296 07:16:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.554 [ 00:16:06.554 { 00:16:06.554 "name": "BaseBdev2", 00:16:06.554 "aliases": [ 00:16:06.554 "9df20038-f039-4a50-bef2-3b0c26ab6f11" 00:16:06.554 ], 00:16:06.554 "product_name": "Malloc disk", 00:16:06.554 "block_size": 512, 00:16:06.554 "num_blocks": 65536, 00:16:06.554 "uuid": "9df20038-f039-4a50-bef2-3b0c26ab6f11", 00:16:06.554 "assigned_rate_limits": { 00:16:06.554 "rw_ios_per_sec": 0, 00:16:06.554 "rw_mbytes_per_sec": 0, 00:16:06.554 "r_mbytes_per_sec": 0, 00:16:06.554 "w_mbytes_per_sec": 0 00:16:06.554 }, 00:16:06.554 "claimed": true, 00:16:06.554 "claim_type": "exclusive_write", 00:16:06.554 "zoned": false, 00:16:06.554 "supported_io_types": { 00:16:06.554 "read": true, 00:16:06.554 "write": true, 00:16:06.554 "unmap": true, 00:16:06.554 "write_zeroes": true, 00:16:06.554 "flush": true, 00:16:06.554 "reset": true, 00:16:06.554 "compare": false, 00:16:06.554 "compare_and_write": false, 00:16:06.554 "abort": true, 00:16:06.554 "nvme_admin": false, 00:16:06.554 "nvme_io": false 00:16:06.554 }, 00:16:06.554 "memory_domains": [ 00:16:06.554 { 00:16:06.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.554 "dma_device_type": 2 00:16:06.554 } 00:16:06.554 ], 00:16:06.554 "driver_specific": {} 00:16:06.554 } 00:16:06.554 ] 00:16:06.554 07:16:40 -- common/autotest_common.sh@893 -- # return 0 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.554 07:16:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:16:06.811 07:16:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:06.812 "name": "Existed_Raid", 00:16:06.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.812 "strip_size_kb": 64, 00:16:06.812 "state": "configuring", 00:16:06.812 "raid_level": "concat", 00:16:06.812 "superblock": false, 00:16:06.812 "num_base_bdevs": 3, 00:16:06.812 "num_base_bdevs_discovered": 2, 00:16:06.812 "num_base_bdevs_operational": 3, 00:16:06.812 "base_bdevs_list": [ 00:16:06.812 { 00:16:06.812 "name": "BaseBdev1", 00:16:06.812 "uuid": "55f02dae-4d11-48d2-bcc4-0ee87f84c572", 00:16:06.812 "is_configured": true, 00:16:06.812 "data_offset": 0, 00:16:06.812 "data_size": 65536 00:16:06.812 }, 00:16:06.812 { 00:16:06.812 "name": "BaseBdev2", 00:16:06.812 "uuid": "9df20038-f039-4a50-bef2-3b0c26ab6f11", 00:16:06.812 "is_configured": true, 00:16:06.812 "data_offset": 0, 00:16:06.812 "data_size": 65536 00:16:06.812 }, 00:16:06.812 { 00:16:06.812 "name": "BaseBdev3", 00:16:06.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.812 "is_configured": false, 00:16:06.812 "data_offset": 0, 00:16:06.812 "data_size": 0 00:16:06.812 } 00:16:06.812 ] 00:16:06.812 }' 00:16:06.812 07:16:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:06.812 07:16:40 -- common/autotest_common.sh@10 -- # set +x 00:16:07.376 07:16:41 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:07.634 [2024-02-13 07:16:41.282974] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.634 [2024-02-13 07:16:41.283054] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:07.634 [2024-02-13 07:16:41.283064] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:07.634 [2024-02-13 07:16:41.283191] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:07.634 [2024-02-13 07:16:41.283614] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:07.634 [2024-02-13 07:16:41.283639] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:07.634 [2024-02-13 07:16:41.283908] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.634 BaseBdev3 00:16:07.634 07:16:41 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:07.634 07:16:41 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:16:07.634 07:16:41 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:07.634 07:16:41 -- common/autotest_common.sh@887 -- # local i 00:16:07.634 07:16:41 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:07.634 07:16:41 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:07.634 07:16:41 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:07.893 07:16:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:08.152 [ 00:16:08.152 { 00:16:08.152 "name": "BaseBdev3", 00:16:08.152 "aliases": [ 00:16:08.152 "91edcd49-a940-486a-9dd3-0f6a4ff5482d" 00:16:08.152 ], 00:16:08.152 "product_name": "Malloc disk", 00:16:08.152 "block_size": 512, 00:16:08.152 "num_blocks": 65536, 00:16:08.152 "uuid": "91edcd49-a940-486a-9dd3-0f6a4ff5482d", 00:16:08.152 "assigned_rate_limits": { 00:16:08.152 
"rw_ios_per_sec": 0, 00:16:08.152 "rw_mbytes_per_sec": 0, 00:16:08.152 "r_mbytes_per_sec": 0, 00:16:08.152 "w_mbytes_per_sec": 0 00:16:08.152 }, 00:16:08.152 "claimed": true, 00:16:08.152 "claim_type": "exclusive_write", 00:16:08.152 "zoned": false, 00:16:08.152 "supported_io_types": { 00:16:08.152 "read": true, 00:16:08.152 "write": true, 00:16:08.152 "unmap": true, 00:16:08.152 "write_zeroes": true, 00:16:08.152 "flush": true, 00:16:08.152 "reset": true, 00:16:08.152 "compare": false, 00:16:08.152 "compare_and_write": false, 00:16:08.152 "abort": true, 00:16:08.152 "nvme_admin": false, 00:16:08.152 "nvme_io": false 00:16:08.152 }, 00:16:08.152 "memory_domains": [ 00:16:08.152 { 00:16:08.152 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.152 "dma_device_type": 2 00:16:08.152 } 00:16:08.152 ], 00:16:08.152 "driver_specific": {} 00:16:08.152 } 00:16:08.152 ] 00:16:08.152 07:16:41 -- common/autotest_common.sh@893 -- # return 0 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.152 07:16:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.410 07:16:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:08.410 "name": "Existed_Raid", 00:16:08.410 "uuid": "8e145378-1adc-47c7-a19a-78bec24d7805", 00:16:08.410 "strip_size_kb": 64, 00:16:08.411 "state": "online", 00:16:08.411 "raid_level": "concat", 00:16:08.411 "superblock": false, 00:16:08.411 "num_base_bdevs": 3, 00:16:08.411 "num_base_bdevs_discovered": 3, 00:16:08.411 "num_base_bdevs_operational": 3, 00:16:08.411 "base_bdevs_list": [ 00:16:08.411 { 00:16:08.411 "name": "BaseBdev1", 00:16:08.411 "uuid": "55f02dae-4d11-48d2-bcc4-0ee87f84c572", 00:16:08.411 "is_configured": true, 00:16:08.411 "data_offset": 0, 00:16:08.411 "data_size": 65536 00:16:08.411 }, 00:16:08.411 { 00:16:08.411 "name": "BaseBdev2", 00:16:08.411 "uuid": "9df20038-f039-4a50-bef2-3b0c26ab6f11", 00:16:08.411 "is_configured": true, 00:16:08.411 "data_offset": 0, 00:16:08.411 "data_size": 65536 00:16:08.411 }, 00:16:08.411 { 00:16:08.411 "name": "BaseBdev3", 00:16:08.411 "uuid": "91edcd49-a940-486a-9dd3-0f6a4ff5482d", 00:16:08.411 "is_configured": true, 00:16:08.411 "data_offset": 0, 00:16:08.411 "data_size": 65536 00:16:08.411 } 00:16:08.411 ] 00:16:08.411 }' 00:16:08.411 07:16:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:08.411 07:16:41 -- common/autotest_common.sh@10 -- # set +x 00:16:08.982 07:16:42 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:16:09.241 [2024-02-13 07:16:42.869106] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.241 [2024-02-13 07:16:42.869143] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.241 [2024-02-13 07:16:42.869212] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.500 07:16:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.759 07:16:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:09.759 "name": "Existed_Raid", 00:16:09.759 "uuid": "8e145378-1adc-47c7-a19a-78bec24d7805", 00:16:09.759 "strip_size_kb": 64, 00:16:09.759 "state": "offline", 00:16:09.759 "raid_level": "concat", 00:16:09.759 "superblock": false, 00:16:09.759 "num_base_bdevs": 3, 00:16:09.759 "num_base_bdevs_discovered": 2, 00:16:09.759 "num_base_bdevs_operational": 2, 00:16:09.759 "base_bdevs_list": [ 00:16:09.759 { 00:16:09.759 "name": null, 00:16:09.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.759 "is_configured": false, 00:16:09.759 "data_offset": 0, 00:16:09.759 "data_size": 65536 00:16:09.759 }, 00:16:09.759 { 00:16:09.759 "name": "BaseBdev2", 00:16:09.759 "uuid": "9df20038-f039-4a50-bef2-3b0c26ab6f11", 00:16:09.759 "is_configured": true, 00:16:09.759 "data_offset": 0, 00:16:09.759 "data_size": 65536 00:16:09.759 }, 00:16:09.759 { 00:16:09.759 "name": "BaseBdev3", 00:16:09.759 "uuid": "91edcd49-a940-486a-9dd3-0f6a4ff5482d", 00:16:09.759 "is_configured": true, 00:16:09.759 "data_offset": 0, 00:16:09.759 "data_size": 65536 00:16:09.759 } 00:16:09.759 ] 00:16:09.759 }' 00:16:09.759 07:16:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:09.759 07:16:43 -- common/autotest_common.sh@10 -- # set +x 00:16:10.325 07:16:43 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:10.325 07:16:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:10.325 07:16:43 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.325 07:16:43 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:10.584 07:16:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:10.584 07:16:44 -- 
bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.584 07:16:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:10.842 [2024-02-13 07:16:44.451702] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:10.842 07:16:44 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:10.842 07:16:44 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:11.100 07:16:44 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.100 07:16:44 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:11.100 07:16:44 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:11.100 07:16:44 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:11.100 07:16:44 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:11.358 [2024-02-13 07:16:45.034605] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:11.358 [2024-02-13 07:16:45.034687] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:11.617 07:16:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:11.617 07:16:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:11.617 07:16:45 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.617 07:16:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:11.875 07:16:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:11.875 07:16:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:11.875 07:16:45 -- bdev/bdev_raid.sh@287 -- # killprocess 120372 00:16:11.875 07:16:45 -- common/autotest_common.sh@924 -- # '[' -z 120372 ']' 00:16:11.875 07:16:45 -- common/autotest_common.sh@928 -- # kill -0 120372 00:16:11.875 07:16:45 -- common/autotest_common.sh@929 -- # uname 00:16:11.875 07:16:45 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:11.875 07:16:45 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 120372 00:16:11.875 killing process with pid 120372 00:16:11.875 07:16:45 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:11.875 07:16:45 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:11.875 07:16:45 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 120372' 00:16:11.875 07:16:45 -- common/autotest_common.sh@943 -- # kill 120372 00:16:11.875 07:16:45 -- common/autotest_common.sh@948 -- # wait 120372 00:16:11.875 [2024-02-13 07:16:45.384272] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.875 [2024-02-13 07:16:45.384412] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.811 ************************************ 00:16:12.811 END TEST raid_state_function_test 00:16:12.811 ************************************ 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:12.811 00:16:12.811 real 0m12.624s 00:16:12.811 user 0m22.521s 00:16:12.811 sys 0m1.475s 00:16:12.811 07:16:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:12.811 07:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:16:12.811 07:16:46 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 
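
Note: the teardown above doubles as the degradation check — has_redundancy returned 1 for concat, so deleting a single base bdev flipped Existed_Raid from "online" straight to "offline" (operational count 3 -> 2) rather than leaving a degraded array. Condensed, that step looks like:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_delete BaseBdev1   # concat has no redundancy...
    $RPC bdev_raid_get_bdevs all | jq -r '.[0].state'   # ...so this prints "offline"
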
00:16:12.811 07:16:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:12.811 07:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:12.811 ************************************ 00:16:12.811 START TEST raid_state_function_test_sb 00:16:12.811 ************************************ 00:16:12.811 07:16:46 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 3 true 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=120778 00:16:12.811 Process raid pid: 120778 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 120778' 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:12.811 07:16:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 120778 /var/tmp/spdk-raid.sock 00:16:12.811 07:16:46 -- common/autotest_common.sh@817 -- # '[' -z 120778 ']' 00:16:12.811 07:16:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:12.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:12.811 07:16:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:12.811 07:16:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:16:12.811 07:16:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:12.811 07:16:46 -- common/autotest_common.sh@10 -- # set +x 00:16:12.811 [2024-02-13 07:16:46.499680] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:12.811 [2024-02-13 07:16:46.500656] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.070 [2024-02-13 07:16:46.672245] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.329 [2024-02-13 07:16:46.880713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.588 [2024-02-13 07:16:47.060455] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.847 07:16:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:13.847 07:16:47 -- common/autotest_common.sh@850 -- # return 0 00:16:13.847 07:16:47 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:14.107 [2024-02-13 07:16:47.620082] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.107 [2024-02-13 07:16:47.620234] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.107 [2024-02-13 07:16:47.620248] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.107 [2024-02-13 07:16:47.620268] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.107 [2024-02-13 07:16:47.620275] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:14.107 [2024-02-13 07:16:47.620320] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:14.107 07:16:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.366 07:16:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:14.366 "name": "Existed_Raid", 00:16:14.366 "uuid": "2ad61735-eb01-4054-bb3d-a5d257486f8b", 00:16:14.366 "strip_size_kb": 64, 00:16:14.366 "state": "configuring", 00:16:14.366 "raid_level": "concat", 00:16:14.366 "superblock": true, 00:16:14.366 "num_base_bdevs": 3, 00:16:14.366 "num_base_bdevs_discovered": 0, 00:16:14.366 "num_base_bdevs_operational": 3, 00:16:14.366 "base_bdevs_list": [ 00:16:14.366 { 00:16:14.366 "name": 
"BaseBdev1", 00:16:14.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.366 "is_configured": false, 00:16:14.366 "data_offset": 0, 00:16:14.366 "data_size": 0 00:16:14.366 }, 00:16:14.366 { 00:16:14.366 "name": "BaseBdev2", 00:16:14.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.366 "is_configured": false, 00:16:14.366 "data_offset": 0, 00:16:14.366 "data_size": 0 00:16:14.366 }, 00:16:14.366 { 00:16:14.366 "name": "BaseBdev3", 00:16:14.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.366 "is_configured": false, 00:16:14.366 "data_offset": 0, 00:16:14.366 "data_size": 0 00:16:14.366 } 00:16:14.366 ] 00:16:14.366 }' 00:16:14.366 07:16:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:14.366 07:16:47 -- common/autotest_common.sh@10 -- # set +x 00:16:14.933 07:16:48 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:15.191 [2024-02-13 07:16:48.812075] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:15.191 [2024-02-13 07:16:48.812117] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:15.191 07:16:48 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:15.450 [2024-02-13 07:16:49.048299] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:15.450 [2024-02-13 07:16:49.048433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:15.450 [2024-02-13 07:16:49.048446] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:15.450 [2024-02-13 07:16:49.048476] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:15.450 [2024-02-13 07:16:49.048484] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:15.450 [2024-02-13 07:16:49.048509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:15.450 07:16:49 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:15.708 [2024-02-13 07:16:49.345057] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.708 BaseBdev1 00:16:15.708 07:16:49 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:15.708 07:16:49 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:15.708 07:16:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:15.708 07:16:49 -- common/autotest_common.sh@887 -- # local i 00:16:15.708 07:16:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:15.708 07:16:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:15.708 07:16:49 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:15.967 07:16:49 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:16.225 [ 00:16:16.225 { 00:16:16.225 "name": "BaseBdev1", 00:16:16.225 "aliases": [ 00:16:16.225 "de5c69ce-5591-4ac5-b186-03b216759009" 00:16:16.225 ], 00:16:16.225 "product_name": "Malloc disk", 00:16:16.225 "block_size": 512, 00:16:16.225 
"num_blocks": 65536, 00:16:16.225 "uuid": "de5c69ce-5591-4ac5-b186-03b216759009", 00:16:16.225 "assigned_rate_limits": { 00:16:16.225 "rw_ios_per_sec": 0, 00:16:16.225 "rw_mbytes_per_sec": 0, 00:16:16.225 "r_mbytes_per_sec": 0, 00:16:16.225 "w_mbytes_per_sec": 0 00:16:16.225 }, 00:16:16.225 "claimed": true, 00:16:16.225 "claim_type": "exclusive_write", 00:16:16.225 "zoned": false, 00:16:16.225 "supported_io_types": { 00:16:16.225 "read": true, 00:16:16.225 "write": true, 00:16:16.225 "unmap": true, 00:16:16.225 "write_zeroes": true, 00:16:16.225 "flush": true, 00:16:16.225 "reset": true, 00:16:16.225 "compare": false, 00:16:16.225 "compare_and_write": false, 00:16:16.225 "abort": true, 00:16:16.225 "nvme_admin": false, 00:16:16.225 "nvme_io": false 00:16:16.225 }, 00:16:16.225 "memory_domains": [ 00:16:16.225 { 00:16:16.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.225 "dma_device_type": 2 00:16:16.225 } 00:16:16.225 ], 00:16:16.225 "driver_specific": {} 00:16:16.225 } 00:16:16.225 ] 00:16:16.225 07:16:49 -- common/autotest_common.sh@893 -- # return 0 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.225 07:16:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.483 07:16:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:16.483 "name": "Existed_Raid", 00:16:16.483 "uuid": "a6ed86bd-6c12-4c5a-a6f3-043e9f6f2b6c", 00:16:16.483 "strip_size_kb": 64, 00:16:16.483 "state": "configuring", 00:16:16.483 "raid_level": "concat", 00:16:16.483 "superblock": true, 00:16:16.483 "num_base_bdevs": 3, 00:16:16.483 "num_base_bdevs_discovered": 1, 00:16:16.483 "num_base_bdevs_operational": 3, 00:16:16.483 "base_bdevs_list": [ 00:16:16.483 { 00:16:16.483 "name": "BaseBdev1", 00:16:16.483 "uuid": "de5c69ce-5591-4ac5-b186-03b216759009", 00:16:16.483 "is_configured": true, 00:16:16.483 "data_offset": 2048, 00:16:16.483 "data_size": 63488 00:16:16.483 }, 00:16:16.483 { 00:16:16.483 "name": "BaseBdev2", 00:16:16.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.483 "is_configured": false, 00:16:16.483 "data_offset": 0, 00:16:16.483 "data_size": 0 00:16:16.483 }, 00:16:16.483 { 00:16:16.483 "name": "BaseBdev3", 00:16:16.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.483 "is_configured": false, 00:16:16.483 "data_offset": 0, 00:16:16.483 "data_size": 0 00:16:16.483 } 00:16:16.483 ] 00:16:16.483 }' 00:16:16.483 07:16:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:16.483 07:16:49 -- common/autotest_common.sh@10 -- # set +x 00:16:17.050 07:16:50 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:17.308 [2024-02-13 07:16:50.957515] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:17.308 [2024-02-13 07:16:50.957600] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:17.308 07:16:50 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:17.308 07:16:50 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:17.875 07:16:51 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.134 BaseBdev1 00:16:18.134 07:16:51 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:18.134 07:16:51 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:18.134 07:16:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:18.134 07:16:51 -- common/autotest_common.sh@887 -- # local i 00:16:18.134 07:16:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:18.134 07:16:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:18.134 07:16:51 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:18.134 07:16:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.393 [ 00:16:18.393 { 00:16:18.393 "name": "BaseBdev1", 00:16:18.393 "aliases": [ 00:16:18.393 "111b0e57-5655-442b-8e46-41de96a2d7b8" 00:16:18.393 ], 00:16:18.393 "product_name": "Malloc disk", 00:16:18.393 "block_size": 512, 00:16:18.393 "num_blocks": 65536, 00:16:18.393 "uuid": "111b0e57-5655-442b-8e46-41de96a2d7b8", 00:16:18.393 "assigned_rate_limits": { 00:16:18.393 "rw_ios_per_sec": 0, 00:16:18.393 "rw_mbytes_per_sec": 0, 00:16:18.393 "r_mbytes_per_sec": 0, 00:16:18.393 "w_mbytes_per_sec": 0 00:16:18.393 }, 00:16:18.393 "claimed": false, 00:16:18.393 "zoned": false, 00:16:18.393 "supported_io_types": { 00:16:18.393 "read": true, 00:16:18.393 "write": true, 00:16:18.393 "unmap": true, 00:16:18.393 "write_zeroes": true, 00:16:18.393 "flush": true, 00:16:18.393 "reset": true, 00:16:18.393 "compare": false, 00:16:18.393 "compare_and_write": false, 00:16:18.393 "abort": true, 00:16:18.393 "nvme_admin": false, 00:16:18.393 "nvme_io": false 00:16:18.393 }, 00:16:18.393 "memory_domains": [ 00:16:18.393 { 00:16:18.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.393 "dma_device_type": 2 00:16:18.393 } 00:16:18.393 ], 00:16:18.393 "driver_specific": {} 00:16:18.393 } 00:16:18.393 ] 00:16:18.393 07:16:52 -- common/autotest_common.sh@893 -- # return 0 00:16:18.393 07:16:52 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:18.652 [2024-02-13 07:16:52.236019] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:18.652 [2024-02-13 07:16:52.237859] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:18.652 [2024-02-13 07:16:52.237929] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:18.652 [2024-02-13 07:16:52.237942] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:18.652 [2024-02-13 
07:16:52.237967] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.652 07:16:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.910 07:16:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:18.910 "name": "Existed_Raid", 00:16:18.910 "uuid": "40957bc0-0a9a-4b1c-9462-6739f957c93a", 00:16:18.910 "strip_size_kb": 64, 00:16:18.910 "state": "configuring", 00:16:18.910 "raid_level": "concat", 00:16:18.910 "superblock": true, 00:16:18.910 "num_base_bdevs": 3, 00:16:18.910 "num_base_bdevs_discovered": 1, 00:16:18.910 "num_base_bdevs_operational": 3, 00:16:18.910 "base_bdevs_list": [ 00:16:18.910 { 00:16:18.910 "name": "BaseBdev1", 00:16:18.910 "uuid": "111b0e57-5655-442b-8e46-41de96a2d7b8", 00:16:18.910 "is_configured": true, 00:16:18.910 "data_offset": 2048, 00:16:18.910 "data_size": 63488 00:16:18.910 }, 00:16:18.910 { 00:16:18.910 "name": "BaseBdev2", 00:16:18.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.910 "is_configured": false, 00:16:18.910 "data_offset": 0, 00:16:18.910 "data_size": 0 00:16:18.910 }, 00:16:18.910 { 00:16:18.910 "name": "BaseBdev3", 00:16:18.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.910 "is_configured": false, 00:16:18.910 "data_offset": 0, 00:16:18.910 "data_size": 0 00:16:18.910 } 00:16:18.910 ] 00:16:18.910 }' 00:16:18.910 07:16:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:18.910 07:16:52 -- common/autotest_common.sh@10 -- # set +x 00:16:19.478 07:16:53 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:19.744 [2024-02-13 07:16:53.361932] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.744 BaseBdev2 00:16:19.744 07:16:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:19.744 07:16:53 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:19.744 07:16:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:19.744 07:16:53 -- common/autotest_common.sh@887 -- # local i 00:16:19.744 07:16:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:19.744 07:16:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:19.744 07:16:53 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:20.014 07:16:53 -- 
common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:20.272 [ 00:16:20.272 { 00:16:20.272 "name": "BaseBdev2", 00:16:20.272 "aliases": [ 00:16:20.272 "3f772443-d162-48ca-8807-fc3ca78a0f10" 00:16:20.273 ], 00:16:20.273 "product_name": "Malloc disk", 00:16:20.273 "block_size": 512, 00:16:20.273 "num_blocks": 65536, 00:16:20.273 "uuid": "3f772443-d162-48ca-8807-fc3ca78a0f10", 00:16:20.273 "assigned_rate_limits": { 00:16:20.273 "rw_ios_per_sec": 0, 00:16:20.273 "rw_mbytes_per_sec": 0, 00:16:20.273 "r_mbytes_per_sec": 0, 00:16:20.273 "w_mbytes_per_sec": 0 00:16:20.273 }, 00:16:20.273 "claimed": true, 00:16:20.273 "claim_type": "exclusive_write", 00:16:20.273 "zoned": false, 00:16:20.273 "supported_io_types": { 00:16:20.273 "read": true, 00:16:20.273 "write": true, 00:16:20.273 "unmap": true, 00:16:20.273 "write_zeroes": true, 00:16:20.273 "flush": true, 00:16:20.273 "reset": true, 00:16:20.273 "compare": false, 00:16:20.273 "compare_and_write": false, 00:16:20.273 "abort": true, 00:16:20.273 "nvme_admin": false, 00:16:20.273 "nvme_io": false 00:16:20.273 }, 00:16:20.273 "memory_domains": [ 00:16:20.273 { 00:16:20.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.273 "dma_device_type": 2 00:16:20.273 } 00:16:20.273 ], 00:16:20.273 "driver_specific": {} 00:16:20.273 } 00:16:20.273 ] 00:16:20.273 07:16:53 -- common/autotest_common.sh@893 -- # return 0 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.273 07:16:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.531 07:16:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:20.531 "name": "Existed_Raid", 00:16:20.531 "uuid": "40957bc0-0a9a-4b1c-9462-6739f957c93a", 00:16:20.531 "strip_size_kb": 64, 00:16:20.531 "state": "configuring", 00:16:20.531 "raid_level": "concat", 00:16:20.531 "superblock": true, 00:16:20.531 "num_base_bdevs": 3, 00:16:20.531 "num_base_bdevs_discovered": 2, 00:16:20.531 "num_base_bdevs_operational": 3, 00:16:20.531 "base_bdevs_list": [ 00:16:20.531 { 00:16:20.531 "name": "BaseBdev1", 00:16:20.531 "uuid": "111b0e57-5655-442b-8e46-41de96a2d7b8", 00:16:20.531 "is_configured": true, 00:16:20.531 "data_offset": 2048, 00:16:20.531 "data_size": 63488 00:16:20.531 }, 00:16:20.531 { 00:16:20.531 "name": "BaseBdev2", 00:16:20.531 "uuid": "3f772443-d162-48ca-8807-fc3ca78a0f10", 00:16:20.531 "is_configured": true, 00:16:20.531 "data_offset": 2048, 00:16:20.531 
"data_size": 63488 00:16:20.531 }, 00:16:20.531 { 00:16:20.531 "name": "BaseBdev3", 00:16:20.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.531 "is_configured": false, 00:16:20.531 "data_offset": 0, 00:16:20.531 "data_size": 0 00:16:20.531 } 00:16:20.531 ] 00:16:20.531 }' 00:16:20.531 07:16:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:20.531 07:16:54 -- common/autotest_common.sh@10 -- # set +x 00:16:21.098 07:16:54 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:21.357 [2024-02-13 07:16:54.976123] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:21.357 [2024-02-13 07:16:54.976413] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:21.357 [2024-02-13 07:16:54.976428] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:21.357 [2024-02-13 07:16:54.976606] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:21.357 BaseBdev3 00:16:21.357 [2024-02-13 07:16:54.976963] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:21.357 [2024-02-13 07:16:54.976979] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:21.357 [2024-02-13 07:16:54.977159] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.357 07:16:54 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:21.357 07:16:54 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:16:21.357 07:16:54 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:21.357 07:16:54 -- common/autotest_common.sh@887 -- # local i 00:16:21.357 07:16:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:21.357 07:16:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:21.357 07:16:54 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.615 07:16:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:21.874 [ 00:16:21.874 { 00:16:21.874 "name": "BaseBdev3", 00:16:21.874 "aliases": [ 00:16:21.874 "c1f03269-1269-4fe1-a9f5-90061328a212" 00:16:21.874 ], 00:16:21.874 "product_name": "Malloc disk", 00:16:21.874 "block_size": 512, 00:16:21.874 "num_blocks": 65536, 00:16:21.874 "uuid": "c1f03269-1269-4fe1-a9f5-90061328a212", 00:16:21.874 "assigned_rate_limits": { 00:16:21.874 "rw_ios_per_sec": 0, 00:16:21.874 "rw_mbytes_per_sec": 0, 00:16:21.874 "r_mbytes_per_sec": 0, 00:16:21.874 "w_mbytes_per_sec": 0 00:16:21.874 }, 00:16:21.874 "claimed": true, 00:16:21.874 "claim_type": "exclusive_write", 00:16:21.874 "zoned": false, 00:16:21.874 "supported_io_types": { 00:16:21.874 "read": true, 00:16:21.874 "write": true, 00:16:21.874 "unmap": true, 00:16:21.874 "write_zeroes": true, 00:16:21.874 "flush": true, 00:16:21.874 "reset": true, 00:16:21.874 "compare": false, 00:16:21.874 "compare_and_write": false, 00:16:21.874 "abort": true, 00:16:21.874 "nvme_admin": false, 00:16:21.874 "nvme_io": false 00:16:21.874 }, 00:16:21.874 "memory_domains": [ 00:16:21.874 { 00:16:21.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.874 "dma_device_type": 2 00:16:21.874 } 00:16:21.874 ], 00:16:21.874 "driver_specific": {} 00:16:21.874 } 00:16:21.874 ] 00:16:21.874 
07:16:55 -- common/autotest_common.sh@893 -- # return 0 00:16:21.874 07:16:55 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:21.874 07:16:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:21.874 07:16:55 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:16:21.874 07:16:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:21.874 07:16:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:21.874 07:16:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:21.874 07:16:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:21.874 07:16:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:21.874 07:16:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:21.875 07:16:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:21.875 07:16:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:21.875 07:16:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:21.875 07:16:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.875 07:16:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.133 07:16:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:22.133 "name": "Existed_Raid", 00:16:22.133 "uuid": "40957bc0-0a9a-4b1c-9462-6739f957c93a", 00:16:22.133 "strip_size_kb": 64, 00:16:22.133 "state": "online", 00:16:22.133 "raid_level": "concat", 00:16:22.133 "superblock": true, 00:16:22.133 "num_base_bdevs": 3, 00:16:22.133 "num_base_bdevs_discovered": 3, 00:16:22.133 "num_base_bdevs_operational": 3, 00:16:22.133 "base_bdevs_list": [ 00:16:22.133 { 00:16:22.133 "name": "BaseBdev1", 00:16:22.133 "uuid": "111b0e57-5655-442b-8e46-41de96a2d7b8", 00:16:22.133 "is_configured": true, 00:16:22.133 "data_offset": 2048, 00:16:22.133 "data_size": 63488 00:16:22.133 }, 00:16:22.133 { 00:16:22.133 "name": "BaseBdev2", 00:16:22.133 "uuid": "3f772443-d162-48ca-8807-fc3ca78a0f10", 00:16:22.133 "is_configured": true, 00:16:22.133 "data_offset": 2048, 00:16:22.133 "data_size": 63488 00:16:22.133 }, 00:16:22.133 { 00:16:22.133 "name": "BaseBdev3", 00:16:22.133 "uuid": "c1f03269-1269-4fe1-a9f5-90061328a212", 00:16:22.133 "is_configured": true, 00:16:22.133 "data_offset": 2048, 00:16:22.133 "data_size": 63488 00:16:22.133 } 00:16:22.133 ] 00:16:22.133 }' 00:16:22.134 07:16:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:22.134 07:16:55 -- common/autotest_common.sh@10 -- # set +x 00:16:22.700 07:16:56 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:22.959 [2024-02-13 07:16:56.624509] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:22.959 [2024-02-13 07:16:56.624544] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:22.959 [2024-02-13 07:16:56.624614] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:16:23.218 07:16:56 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.218 07:16:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:23.477 07:16:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:23.477 "name": "Existed_Raid", 00:16:23.477 "uuid": "40957bc0-0a9a-4b1c-9462-6739f957c93a", 00:16:23.477 "strip_size_kb": 64, 00:16:23.477 "state": "offline", 00:16:23.477 "raid_level": "concat", 00:16:23.477 "superblock": true, 00:16:23.477 "num_base_bdevs": 3, 00:16:23.477 "num_base_bdevs_discovered": 2, 00:16:23.477 "num_base_bdevs_operational": 2, 00:16:23.477 "base_bdevs_list": [ 00:16:23.477 { 00:16:23.477 "name": null, 00:16:23.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.477 "is_configured": false, 00:16:23.477 "data_offset": 2048, 00:16:23.477 "data_size": 63488 00:16:23.477 }, 00:16:23.477 { 00:16:23.477 "name": "BaseBdev2", 00:16:23.477 "uuid": "3f772443-d162-48ca-8807-fc3ca78a0f10", 00:16:23.477 "is_configured": true, 00:16:23.477 "data_offset": 2048, 00:16:23.477 "data_size": 63488 00:16:23.477 }, 00:16:23.477 { 00:16:23.477 "name": "BaseBdev3", 00:16:23.477 "uuid": "c1f03269-1269-4fe1-a9f5-90061328a212", 00:16:23.477 "is_configured": true, 00:16:23.477 "data_offset": 2048, 00:16:23.477 "data_size": 63488 00:16:23.477 } 00:16:23.477 ] 00:16:23.477 }' 00:16:23.477 07:16:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:23.477 07:16:56 -- common/autotest_common.sh@10 -- # set +x 00:16:24.045 07:16:57 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:24.045 07:16:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:24.045 07:16:57 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.045 07:16:57 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:24.304 07:16:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:24.304 07:16:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.304 07:16:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:24.562 [2024-02-13 07:16:58.089466] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:24.562 07:16:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:24.562 07:16:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:24.563 07:16:58 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:24.563 07:16:58 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:24.822 07:16:58 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:24.822 07:16:58 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:24.822 07:16:58 -- 
bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:25.081 [2024-02-13 07:16:58.651345] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:25.081 [2024-02-13 07:16:58.651412] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:16:25.081 07:16:58 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:25.081 07:16:58 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:25.081 07:16:58 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:25.081 07:16:58 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:25.339 07:16:58 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:25.339 07:16:58 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:25.339 07:16:58 -- bdev/bdev_raid.sh@287 -- # killprocess 120778 00:16:25.339 07:16:58 -- common/autotest_common.sh@924 -- # '[' -z 120778 ']' 00:16:25.339 07:16:58 -- common/autotest_common.sh@928 -- # kill -0 120778 00:16:25.339 07:16:58 -- common/autotest_common.sh@929 -- # uname 00:16:25.339 07:16:58 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:25.340 07:16:58 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 120778 00:16:25.340 killing process with pid 120778 00:16:25.340 07:16:58 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:25.340 07:16:58 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:25.340 07:16:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 120778' 00:16:25.340 07:16:58 -- common/autotest_common.sh@943 -- # kill 120778 00:16:25.340 07:16:58 -- common/autotest_common.sh@948 -- # wait 120778 00:16:25.340 [2024-02-13 07:16:58.965332] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:25.340 [2024-02-13 07:16:58.965524] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:26.717 ************************************ 00:16:26.717 END TEST raid_state_function_test_sb 00:16:26.717 ************************************ 00:16:26.717 07:16:59 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:26.717 00:16:26.717 real 0m13.561s 00:16:26.717 user 0m24.068s 00:16:26.717 sys 0m1.633s 00:16:26.717 07:16:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:26.717 07:16:59 -- common/autotest_common.sh@10 -- # set +x 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:16:26.717 07:17:00 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:16:26.717 07:17:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:26.717 07:17:00 -- common/autotest_common.sh@10 -- # set +x 00:16:26.717 ************************************ 00:16:26.717 START TEST raid_superblock_test 00:16:26.717 ************************************ 00:16:26.717 07:17:00 -- common/autotest_common.sh@1102 -- # raid_superblock_test concat 3 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 
00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@357 -- # raid_pid=121187 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:26.717 07:17:00 -- bdev/bdev_raid.sh@358 -- # waitforlisten 121187 /var/tmp/spdk-raid.sock 00:16:26.717 07:17:00 -- common/autotest_common.sh@817 -- # '[' -z 121187 ']' 00:16:26.717 07:17:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:26.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:26.717 07:17:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:26.717 07:17:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:26.717 07:17:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:26.717 07:17:00 -- common/autotest_common.sh@10 -- # set +x 00:16:26.717 [2024-02-13 07:17:00.102868] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:26.717 [2024-02-13 07:17:00.103026] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid121187 ] 00:16:26.717 [2024-02-13 07:17:00.262208] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.976 [2024-02-13 07:17:00.504896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.233 [2024-02-13 07:17:00.687164] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:27.497 07:17:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:27.497 07:17:01 -- common/autotest_common.sh@850 -- # return 0 00:16:27.497 07:17:01 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:16:27.497 07:17:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:27.497 07:17:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:16:27.497 07:17:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:16:27.497 07:17:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:27.497 07:17:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:27.497 07:17:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:27.497 07:17:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:27.497 07:17:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:27.762 malloc1 00:16:27.762 07:17:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:28.021 [2024-02-13 07:17:01.584910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:28.021 [2024-02-13 07:17:01.584995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.021 [2024-02-13 07:17:01.585034] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:16:28.021 [2024-02-13 07:17:01.585115] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.021 [2024-02-13 07:17:01.587688] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.021 [2024-02-13 07:17:01.587749] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:28.021 pt1 00:16:28.021 07:17:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.021 07:17:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.021 07:17:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:16:28.021 07:17:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:16:28.021 07:17:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:28.021 07:17:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.021 07:17:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.021 07:17:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.021 07:17:01 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:28.280 malloc2 00:16:28.280 07:17:01 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.538 [2024-02-13 07:17:02.036360] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.538 [2024-02-13 07:17:02.036454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.538 [2024-02-13 07:17:02.036496] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:16:28.538 [2024-02-13 07:17:02.036550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.538 [2024-02-13 07:17:02.038617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.538 [2024-02-13 07:17:02.038677] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.538 pt2 00:16:28.538 07:17:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.538 07:17:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.538 07:17:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:16:28.538 07:17:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:16:28.538 07:17:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:28.538 07:17:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:28.538 07:17:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:16:28.538 07:17:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:28.538 07:17:02 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:28.796 malloc3 00:16:28.797 07:17:02 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 
00000000-0000-0000-0000-000000000003 00:16:28.797 [2024-02-13 07:17:02.468841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:28.797 [2024-02-13 07:17:02.468930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.797 [2024-02-13 07:17:02.468966] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:16:28.797 [2024-02-13 07:17:02.469007] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.797 [2024-02-13 07:17:02.471156] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.797 [2024-02-13 07:17:02.471220] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:28.797 pt3 00:16:28.797 07:17:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:16:28.797 07:17:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:16:28.797 07:17:02 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:16:29.054 [2024-02-13 07:17:02.672890] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.054 [2024-02-13 07:17:02.674944] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.054 [2024-02-13 07:17:02.675034] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:29.054 [2024-02-13 07:17:02.675247] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:16:29.054 [2024-02-13 07:17:02.675276] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:29.054 [2024-02-13 07:17:02.675428] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:16:29.054 [2024-02-13 07:17:02.675809] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:16:29.054 [2024-02-13 07:17:02.675833] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:16:29.055 [2024-02-13 07:17:02.675989] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.055 07:17:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.313 07:17:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:29.313 "name": "raid_bdev1", 00:16:29.313 "uuid": "d5575945-2c33-4a26-ac22-b7d43af81ec4", 00:16:29.313 "strip_size_kb": 64, 00:16:29.313 "state": "online", 00:16:29.313 "raid_level": "concat", 
00:16:29.313 "superblock": true, 00:16:29.313 "num_base_bdevs": 3, 00:16:29.313 "num_base_bdevs_discovered": 3, 00:16:29.313 "num_base_bdevs_operational": 3, 00:16:29.313 "base_bdevs_list": [ 00:16:29.313 { 00:16:29.313 "name": "pt1", 00:16:29.313 "uuid": "813375e2-79e2-5be6-ab8c-054775c67674", 00:16:29.313 "is_configured": true, 00:16:29.313 "data_offset": 2048, 00:16:29.313 "data_size": 63488 00:16:29.313 }, 00:16:29.313 { 00:16:29.313 "name": "pt2", 00:16:29.313 "uuid": "22dabe4f-8891-50da-8346-40a159006a78", 00:16:29.313 "is_configured": true, 00:16:29.313 "data_offset": 2048, 00:16:29.313 "data_size": 63488 00:16:29.313 }, 00:16:29.313 { 00:16:29.313 "name": "pt3", 00:16:29.313 "uuid": "45e68091-c295-5b67-81aa-67aae6022625", 00:16:29.313 "is_configured": true, 00:16:29.313 "data_offset": 2048, 00:16:29.313 "data_size": 63488 00:16:29.313 } 00:16:29.313 ] 00:16:29.313 }' 00:16:29.313 07:17:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:29.313 07:17:02 -- common/autotest_common.sh@10 -- # set +x 00:16:29.880 07:17:03 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:29.880 07:17:03 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:16:30.138 [2024-02-13 07:17:03.821302] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.397 07:17:03 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d5575945-2c33-4a26-ac22-b7d43af81ec4 00:16:30.397 07:17:03 -- bdev/bdev_raid.sh@380 -- # '[' -z d5575945-2c33-4a26-ac22-b7d43af81ec4 ']' 00:16:30.397 07:17:03 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:30.397 [2024-02-13 07:17:04.057129] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.397 [2024-02-13 07:17:04.057157] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.397 [2024-02-13 07:17:04.057262] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.397 [2024-02-13 07:17:04.057342] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.397 [2024-02-13 07:17:04.057354] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:16:30.397 07:17:04 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.397 07:17:04 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:16:30.655 07:17:04 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:16:30.656 07:17:04 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:16:30.656 07:17:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.656 07:17:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:30.914 07:17:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.914 07:17:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:31.173 07:17:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.173 07:17:04 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:31.173 07:17:04 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs 00:16:31.173 07:17:04 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:31.432 07:17:05 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:16:31.432 07:17:05 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:31.432 07:17:05 -- common/autotest_common.sh@638 -- # local es=0 00:16:31.432 07:17:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:31.432 07:17:05 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.432 07:17:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:31.432 07:17:05 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.432 07:17:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:31.432 07:17:05 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.432 07:17:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:31.432 07:17:05 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:31.432 07:17:05 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:31.432 07:17:05 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:16:31.691 [2024-02-13 07:17:05.245409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:31.691 [2024-02-13 07:17:05.247160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:31.691 [2024-02-13 07:17:05.247234] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:31.691 [2024-02-13 07:17:05.247293] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:16:31.691 [2024-02-13 07:17:05.247370] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:16:31.691 [2024-02-13 07:17:05.247420] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:16:31.692 [2024-02-13 07:17:05.247478] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.692 [2024-02-13 07:17:05.247490] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:16:31.692 request: 00:16:31.692 { 00:16:31.692 "name": "raid_bdev1", 00:16:31.692 "raid_level": "concat", 00:16:31.692 "base_bdevs": [ 00:16:31.692 "malloc1", 00:16:31.692 "malloc2", 00:16:31.692 "malloc3" 00:16:31.692 ], 00:16:31.692 "superblock": false, 00:16:31.692 "strip_size_kb": 64, 00:16:31.692 "method": "bdev_raid_create", 00:16:31.692 "req_id": 1 00:16:31.692 } 00:16:31.692 Got JSON-RPC error response 00:16:31.692 response: 00:16:31.692 { 00:16:31.692 "code": -17, 00:16:31.692 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:31.692 } 00:16:31.692 07:17:05 -- common/autotest_common.sh@641 -- # es=1 00:16:31.692 07:17:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 
00:16:31.692 07:17:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:31.692 07:17:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:31.692 07:17:05 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.692 07:17:05 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:16:31.951 07:17:05 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:16:31.951 07:17:05 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:16:31.951 07:17:05 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.210 [2024-02-13 07:17:05.649433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.210 [2024-02-13 07:17:05.649510] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.210 [2024-02-13 07:17:05.649547] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:32.210 [2024-02-13 07:17:05.649568] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.210 [2024-02-13 07:17:05.651673] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.210 [2024-02-13 07:17:05.651719] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.210 [2024-02-13 07:17:05.651839] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:16:32.210 [2024-02-13 07:17:05.651897] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.210 pt1 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.210 07:17:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.469 07:17:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:32.469 "name": "raid_bdev1", 00:16:32.469 "uuid": "d5575945-2c33-4a26-ac22-b7d43af81ec4", 00:16:32.469 "strip_size_kb": 64, 00:16:32.469 "state": "configuring", 00:16:32.469 "raid_level": "concat", 00:16:32.469 "superblock": true, 00:16:32.469 "num_base_bdevs": 3, 00:16:32.469 "num_base_bdevs_discovered": 1, 00:16:32.469 "num_base_bdevs_operational": 3, 00:16:32.469 "base_bdevs_list": [ 00:16:32.469 { 00:16:32.469 "name": "pt1", 00:16:32.469 "uuid": "813375e2-79e2-5be6-ab8c-054775c67674", 00:16:32.469 "is_configured": true, 00:16:32.469 "data_offset": 2048, 00:16:32.469 "data_size": 63488 00:16:32.469 }, 00:16:32.469 { 00:16:32.469 "name": null, 00:16:32.469 "uuid": "22dabe4f-8891-50da-8346-40a159006a78", 00:16:32.469 "is_configured": 
false, 00:16:32.469 "data_offset": 2048, 00:16:32.469 "data_size": 63488 00:16:32.469 }, 00:16:32.469 { 00:16:32.469 "name": null, 00:16:32.469 "uuid": "45e68091-c295-5b67-81aa-67aae6022625", 00:16:32.469 "is_configured": false, 00:16:32.469 "data_offset": 2048, 00:16:32.469 "data_size": 63488 00:16:32.469 } 00:16:32.469 ] 00:16:32.469 }' 00:16:32.469 07:17:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:32.469 07:17:05 -- common/autotest_common.sh@10 -- # set +x 00:16:33.037 07:17:06 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:16:33.037 07:17:06 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.296 [2024-02-13 07:17:06.937782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.296 [2024-02-13 07:17:06.937916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.296 [2024-02-13 07:17:06.937993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:33.296 [2024-02-13 07:17:06.938017] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.296 [2024-02-13 07:17:06.938558] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.296 [2024-02-13 07:17:06.938597] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.296 [2024-02-13 07:17:06.938738] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:33.296 [2024-02-13 07:17:06.938773] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.296 pt2 00:16:33.296 07:17:06 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:33.554 [2024-02-13 07:17:07.137794] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.554 07:17:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.813 07:17:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:33.813 "name": "raid_bdev1", 00:16:33.813 "uuid": "d5575945-2c33-4a26-ac22-b7d43af81ec4", 00:16:33.813 "strip_size_kb": 64, 00:16:33.813 "state": "configuring", 00:16:33.813 "raid_level": "concat", 00:16:33.813 "superblock": true, 00:16:33.813 "num_base_bdevs": 3, 00:16:33.813 "num_base_bdevs_discovered": 1, 00:16:33.813 "num_base_bdevs_operational": 3, 00:16:33.813 "base_bdevs_list": [ 00:16:33.813 { 00:16:33.813 "name": "pt1", 
00:16:33.813 "uuid": "813375e2-79e2-5be6-ab8c-054775c67674", 00:16:33.813 "is_configured": true, 00:16:33.813 "data_offset": 2048, 00:16:33.813 "data_size": 63488 00:16:33.813 }, 00:16:33.813 { 00:16:33.813 "name": null, 00:16:33.813 "uuid": "22dabe4f-8891-50da-8346-40a159006a78", 00:16:33.813 "is_configured": false, 00:16:33.813 "data_offset": 2048, 00:16:33.813 "data_size": 63488 00:16:33.813 }, 00:16:33.813 { 00:16:33.813 "name": null, 00:16:33.813 "uuid": "45e68091-c295-5b67-81aa-67aae6022625", 00:16:33.813 "is_configured": false, 00:16:33.813 "data_offset": 2048, 00:16:33.813 "data_size": 63488 00:16:33.813 } 00:16:33.813 ] 00:16:33.813 }' 00:16:33.813 07:17:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:33.813 07:17:07 -- common/autotest_common.sh@10 -- # set +x 00:16:34.382 07:17:08 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:16:34.382 07:17:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.382 07:17:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:34.641 [2024-02-13 07:17:08.246066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:34.641 [2024-02-13 07:17:08.246209] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.641 [2024-02-13 07:17:08.246253] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:34.641 [2024-02-13 07:17:08.246292] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.641 [2024-02-13 07:17:08.246918] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.641 [2024-02-13 07:17:08.246982] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:34.641 [2024-02-13 07:17:08.247107] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:16:34.641 [2024-02-13 07:17:08.247137] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:34.641 pt2 00:16:34.641 07:17:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:34.641 07:17:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.641 07:17:08 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:34.935 [2024-02-13 07:17:08.498065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:34.935 [2024-02-13 07:17:08.498164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:34.935 [2024-02-13 07:17:08.498200] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:34.935 [2024-02-13 07:17:08.498249] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:34.935 [2024-02-13 07:17:08.498711] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:34.935 [2024-02-13 07:17:08.498760] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:34.935 [2024-02-13 07:17:08.498918] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:16:34.935 [2024-02-13 07:17:08.498946] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:34.935 [2024-02-13 07:17:08.499090] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 
0x616000009c80 00:16:34.935 [2024-02-13 07:17:08.499117] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:34.935 [2024-02-13 07:17:08.499247] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:16:34.935 [2024-02-13 07:17:08.499631] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:16:34.935 [2024-02-13 07:17:08.499669] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:16:34.935 [2024-02-13 07:17:08.499832] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.935 pt3 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:34.935 07:17:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.193 07:17:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:35.193 "name": "raid_bdev1", 00:16:35.193 "uuid": "d5575945-2c33-4a26-ac22-b7d43af81ec4", 00:16:35.193 "strip_size_kb": 64, 00:16:35.193 "state": "online", 00:16:35.193 "raid_level": "concat", 00:16:35.193 "superblock": true, 00:16:35.193 "num_base_bdevs": 3, 00:16:35.193 "num_base_bdevs_discovered": 3, 00:16:35.193 "num_base_bdevs_operational": 3, 00:16:35.193 "base_bdevs_list": [ 00:16:35.193 { 00:16:35.193 "name": "pt1", 00:16:35.193 "uuid": "813375e2-79e2-5be6-ab8c-054775c67674", 00:16:35.193 "is_configured": true, 00:16:35.193 "data_offset": 2048, 00:16:35.193 "data_size": 63488 00:16:35.193 }, 00:16:35.193 { 00:16:35.193 "name": "pt2", 00:16:35.193 "uuid": "22dabe4f-8891-50da-8346-40a159006a78", 00:16:35.193 "is_configured": true, 00:16:35.193 "data_offset": 2048, 00:16:35.193 "data_size": 63488 00:16:35.193 }, 00:16:35.193 { 00:16:35.193 "name": "pt3", 00:16:35.193 "uuid": "45e68091-c295-5b67-81aa-67aae6022625", 00:16:35.193 "is_configured": true, 00:16:35.193 "data_offset": 2048, 00:16:35.193 "data_size": 63488 00:16:35.193 } 00:16:35.193 ] 00:16:35.193 }' 00:16:35.193 07:17:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:35.193 07:17:08 -- common/autotest_common.sh@10 -- # set +x 00:16:35.760 07:17:09 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:35.760 07:17:09 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:16:36.020 [2024-02-13 07:17:09.558616] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.020 07:17:09 -- bdev/bdev_raid.sh@430 -- # '[' 
d5575945-2c33-4a26-ac22-b7d43af81ec4 '!=' d5575945-2c33-4a26-ac22-b7d43af81ec4 ']' 00:16:36.020 07:17:09 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:16:36.020 07:17:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:36.020 07:17:09 -- bdev/bdev_raid.sh@197 -- # return 1 00:16:36.020 07:17:09 -- bdev/bdev_raid.sh@511 -- # killprocess 121187 00:16:36.020 07:17:09 -- common/autotest_common.sh@924 -- # '[' -z 121187 ']' 00:16:36.020 07:17:09 -- common/autotest_common.sh@928 -- # kill -0 121187 00:16:36.020 07:17:09 -- common/autotest_common.sh@929 -- # uname 00:16:36.020 07:17:09 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:36.020 07:17:09 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 121187 00:16:36.020 07:17:09 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:36.020 07:17:09 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:36.020 07:17:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 121187' 00:16:36.020 killing process with pid 121187 00:16:36.020 07:17:09 -- common/autotest_common.sh@943 -- # kill 121187 00:16:36.020 07:17:09 -- common/autotest_common.sh@948 -- # wait 121187 00:16:36.020 [2024-02-13 07:17:09.595755] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:36.020 [2024-02-13 07:17:09.595833] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:36.020 [2024-02-13 07:17:09.595918] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:36.020 [2024-02-13 07:17:09.595944] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:16:36.279 [2024-02-13 07:17:09.796412] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.215 ************************************ 00:16:37.215 END TEST raid_superblock_test 00:16:37.215 ************************************ 00:16:37.215 07:17:10 -- bdev/bdev_raid.sh@513 -- # return 0 00:16:37.215 00:16:37.215 real 0m10.728s 00:16:37.215 user 0m18.809s 00:16:37.215 sys 0m1.253s 00:16:37.215 07:17:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:37.215 07:17:10 -- common/autotest_common.sh@10 -- # set +x 00:16:37.215 07:17:10 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:16:37.215 07:17:10 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:16:37.215 07:17:10 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:16:37.215 07:17:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:37.215 07:17:10 -- common/autotest_common.sh@10 -- # set +x 00:16:37.216 ************************************ 00:16:37.216 START TEST raid_state_function_test 00:16:37.216 ************************************ 00:16:37.216 07:17:10 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 3 false 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.216 07:17:10 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@226 -- # raid_pid=121525 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121525' 00:16:37.216 Process raid pid: 121525 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121525 /var/tmp/spdk-raid.sock 00:16:37.216 07:17:10 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:37.216 07:17:10 -- common/autotest_common.sh@817 -- # '[' -z 121525 ']' 00:16:37.216 07:17:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:37.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:37.216 07:17:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:37.216 07:17:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:37.216 07:17:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:37.216 07:17:10 -- common/autotest_common.sh@10 -- # set +x 00:16:37.216 [2024-02-13 07:17:10.902372] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:16:37.216 [2024-02-13 07:17:10.902580] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.475 [2024-02-13 07:17:11.063741] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.734 [2024-02-13 07:17:11.255680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.993 [2024-02-13 07:17:11.440113] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.252 07:17:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:38.252 07:17:11 -- common/autotest_common.sh@850 -- # return 0 00:16:38.252 07:17:11 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:38.511 [2024-02-13 07:17:12.061800] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:38.511 [2024-02-13 07:17:12.061925] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:38.511 [2024-02-13 07:17:12.061939] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:38.511 [2024-02-13 07:17:12.061958] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:38.511 [2024-02-13 07:17:12.061966] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:38.511 [2024-02-13 07:17:12.062046] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:38.511 07:17:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.770 07:17:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:38.770 "name": "Existed_Raid", 00:16:38.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.770 "strip_size_kb": 0, 00:16:38.770 "state": "configuring", 00:16:38.770 "raid_level": "raid1", 00:16:38.770 "superblock": false, 00:16:38.770 "num_base_bdevs": 3, 00:16:38.770 "num_base_bdevs_discovered": 0, 00:16:38.770 "num_base_bdevs_operational": 3, 00:16:38.770 "base_bdevs_list": [ 00:16:38.770 { 00:16:38.770 "name": "BaseBdev1", 00:16:38.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.770 "is_configured": false, 00:16:38.770 "data_offset": 0, 00:16:38.770 "data_size": 0 00:16:38.770 }, 00:16:38.770 { 00:16:38.770 "name": "BaseBdev2", 00:16:38.770 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:38.770 "is_configured": false, 00:16:38.770 "data_offset": 0, 00:16:38.770 "data_size": 0 00:16:38.770 }, 00:16:38.770 { 00:16:38.770 "name": "BaseBdev3", 00:16:38.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.770 "is_configured": false, 00:16:38.770 "data_offset": 0, 00:16:38.770 "data_size": 0 00:16:38.770 } 00:16:38.770 ] 00:16:38.770 }' 00:16:38.770 07:17:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:38.770 07:17:12 -- common/autotest_common.sh@10 -- # set +x 00:16:39.338 07:17:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:39.596 [2024-02-13 07:17:13.233867] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:39.596 [2024-02-13 07:17:13.233910] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:39.596 07:17:13 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:39.854 [2024-02-13 07:17:13.469954] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:39.854 [2024-02-13 07:17:13.470067] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:39.854 [2024-02-13 07:17:13.470099] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:39.854 [2024-02-13 07:17:13.470129] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:39.854 [2024-02-13 07:17:13.470138] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:39.854 [2024-02-13 07:17:13.470166] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:39.854 07:17:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:40.113 [2024-02-13 07:17:13.753309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:40.113 BaseBdev1 00:16:40.113 07:17:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:40.113 07:17:13 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:40.113 07:17:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:40.113 07:17:13 -- common/autotest_common.sh@887 -- # local i 00:16:40.113 07:17:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:40.113 07:17:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:40.113 07:17:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:40.372 07:17:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:40.631 [ 00:16:40.632 { 00:16:40.632 "name": "BaseBdev1", 00:16:40.632 "aliases": [ 00:16:40.632 "465d85b5-568c-471d-8dd9-ac6941d07665" 00:16:40.632 ], 00:16:40.632 "product_name": "Malloc disk", 00:16:40.632 "block_size": 512, 00:16:40.632 "num_blocks": 65536, 00:16:40.632 "uuid": "465d85b5-568c-471d-8dd9-ac6941d07665", 00:16:40.632 "assigned_rate_limits": { 00:16:40.632 "rw_ios_per_sec": 0, 00:16:40.632 "rw_mbytes_per_sec": 0, 00:16:40.632 "r_mbytes_per_sec": 0, 00:16:40.632 "w_mbytes_per_sec": 0 
00:16:40.632 }, 00:16:40.632 "claimed": true, 00:16:40.632 "claim_type": "exclusive_write", 00:16:40.632 "zoned": false, 00:16:40.632 "supported_io_types": { 00:16:40.632 "read": true, 00:16:40.632 "write": true, 00:16:40.632 "unmap": true, 00:16:40.632 "write_zeroes": true, 00:16:40.632 "flush": true, 00:16:40.632 "reset": true, 00:16:40.632 "compare": false, 00:16:40.632 "compare_and_write": false, 00:16:40.632 "abort": true, 00:16:40.632 "nvme_admin": false, 00:16:40.632 "nvme_io": false 00:16:40.632 }, 00:16:40.632 "memory_domains": [ 00:16:40.632 { 00:16:40.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.632 "dma_device_type": 2 00:16:40.632 } 00:16:40.632 ], 00:16:40.632 "driver_specific": {} 00:16:40.632 } 00:16:40.632 ] 00:16:40.632 07:17:14 -- common/autotest_common.sh@893 -- # return 0 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.632 07:17:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:40.891 07:17:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:40.891 "name": "Existed_Raid", 00:16:40.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.891 "strip_size_kb": 0, 00:16:40.891 "state": "configuring", 00:16:40.891 "raid_level": "raid1", 00:16:40.891 "superblock": false, 00:16:40.891 "num_base_bdevs": 3, 00:16:40.891 "num_base_bdevs_discovered": 1, 00:16:40.891 "num_base_bdevs_operational": 3, 00:16:40.891 "base_bdevs_list": [ 00:16:40.891 { 00:16:40.891 "name": "BaseBdev1", 00:16:40.891 "uuid": "465d85b5-568c-471d-8dd9-ac6941d07665", 00:16:40.891 "is_configured": true, 00:16:40.891 "data_offset": 0, 00:16:40.891 "data_size": 65536 00:16:40.891 }, 00:16:40.891 { 00:16:40.891 "name": "BaseBdev2", 00:16:40.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.891 "is_configured": false, 00:16:40.891 "data_offset": 0, 00:16:40.891 "data_size": 0 00:16:40.891 }, 00:16:40.891 { 00:16:40.891 "name": "BaseBdev3", 00:16:40.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.891 "is_configured": false, 00:16:40.891 "data_offset": 0, 00:16:40.891 "data_size": 0 00:16:40.891 } 00:16:40.891 ] 00:16:40.891 }' 00:16:40.891 07:17:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:40.891 07:17:14 -- common/autotest_common.sh@10 -- # set +x 00:16:41.458 07:17:15 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:41.720 [2024-02-13 07:17:15.241728] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:41.720 [2024-02-13 07:17:15.241808] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 
name Existed_Raid, state configuring 00:16:41.720 07:17:15 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:16:41.720 07:17:15 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:41.982 [2024-02-13 07:17:15.509785] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.982 [2024-02-13 07:17:15.511875] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:41.982 [2024-02-13 07:17:15.511965] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:41.982 [2024-02-13 07:17:15.511995] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:41.982 [2024-02-13 07:17:15.512022] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.982 07:17:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:42.241 07:17:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:42.241 "name": "Existed_Raid", 00:16:42.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.241 "strip_size_kb": 0, 00:16:42.241 "state": "configuring", 00:16:42.241 "raid_level": "raid1", 00:16:42.241 "superblock": false, 00:16:42.241 "num_base_bdevs": 3, 00:16:42.241 "num_base_bdevs_discovered": 1, 00:16:42.241 "num_base_bdevs_operational": 3, 00:16:42.241 "base_bdevs_list": [ 00:16:42.241 { 00:16:42.241 "name": "BaseBdev1", 00:16:42.241 "uuid": "465d85b5-568c-471d-8dd9-ac6941d07665", 00:16:42.241 "is_configured": true, 00:16:42.241 "data_offset": 0, 00:16:42.241 "data_size": 65536 00:16:42.241 }, 00:16:42.241 { 00:16:42.241 "name": "BaseBdev2", 00:16:42.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.241 "is_configured": false, 00:16:42.241 "data_offset": 0, 00:16:42.241 "data_size": 0 00:16:42.241 }, 00:16:42.241 { 00:16:42.241 "name": "BaseBdev3", 00:16:42.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.241 "is_configured": false, 00:16:42.241 "data_offset": 0, 00:16:42.241 "data_size": 0 00:16:42.241 } 00:16:42.241 ] 00:16:42.241 }' 00:16:42.241 07:17:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:42.241 07:17:15 -- common/autotest_common.sh@10 -- # set +x 00:16:42.809 07:17:16 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:43.067 [2024-02-13 07:17:16.633417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.067 BaseBdev2 00:16:43.067 07:17:16 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:43.067 07:17:16 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:43.067 07:17:16 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:43.067 07:17:16 -- common/autotest_common.sh@887 -- # local i 00:16:43.067 07:17:16 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:43.067 07:17:16 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:43.067 07:17:16 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:43.326 07:17:16 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:43.585 [ 00:16:43.585 { 00:16:43.585 "name": "BaseBdev2", 00:16:43.585 "aliases": [ 00:16:43.585 "285864ba-183d-45c6-874b-c2c56849b9f1" 00:16:43.585 ], 00:16:43.585 "product_name": "Malloc disk", 00:16:43.585 "block_size": 512, 00:16:43.585 "num_blocks": 65536, 00:16:43.585 "uuid": "285864ba-183d-45c6-874b-c2c56849b9f1", 00:16:43.585 "assigned_rate_limits": { 00:16:43.585 "rw_ios_per_sec": 0, 00:16:43.585 "rw_mbytes_per_sec": 0, 00:16:43.585 "r_mbytes_per_sec": 0, 00:16:43.585 "w_mbytes_per_sec": 0 00:16:43.585 }, 00:16:43.585 "claimed": true, 00:16:43.585 "claim_type": "exclusive_write", 00:16:43.585 "zoned": false, 00:16:43.585 "supported_io_types": { 00:16:43.585 "read": true, 00:16:43.585 "write": true, 00:16:43.585 "unmap": true, 00:16:43.585 "write_zeroes": true, 00:16:43.585 "flush": true, 00:16:43.585 "reset": true, 00:16:43.585 "compare": false, 00:16:43.585 "compare_and_write": false, 00:16:43.585 "abort": true, 00:16:43.585 "nvme_admin": false, 00:16:43.585 "nvme_io": false 00:16:43.585 }, 00:16:43.585 "memory_domains": [ 00:16:43.585 { 00:16:43.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.585 "dma_device_type": 2 00:16:43.585 } 00:16:43.585 ], 00:16:43.585 "driver_specific": {} 00:16:43.585 } 00:16:43.585 ] 00:16:43.585 07:17:17 -- common/autotest_common.sh@893 -- # return 0 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.585 07:17:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.844 07:17:17 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:16:43.844 "name": "Existed_Raid", 00:16:43.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.844 "strip_size_kb": 0, 00:16:43.844 "state": "configuring", 00:16:43.844 "raid_level": "raid1", 00:16:43.844 "superblock": false, 00:16:43.844 "num_base_bdevs": 3, 00:16:43.844 "num_base_bdevs_discovered": 2, 00:16:43.844 "num_base_bdevs_operational": 3, 00:16:43.844 "base_bdevs_list": [ 00:16:43.844 { 00:16:43.844 "name": "BaseBdev1", 00:16:43.844 "uuid": "465d85b5-568c-471d-8dd9-ac6941d07665", 00:16:43.844 "is_configured": true, 00:16:43.844 "data_offset": 0, 00:16:43.844 "data_size": 65536 00:16:43.844 }, 00:16:43.844 { 00:16:43.844 "name": "BaseBdev2", 00:16:43.844 "uuid": "285864ba-183d-45c6-874b-c2c56849b9f1", 00:16:43.844 "is_configured": true, 00:16:43.844 "data_offset": 0, 00:16:43.844 "data_size": 65536 00:16:43.844 }, 00:16:43.844 { 00:16:43.844 "name": "BaseBdev3", 00:16:43.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.844 "is_configured": false, 00:16:43.844 "data_offset": 0, 00:16:43.844 "data_size": 0 00:16:43.844 } 00:16:43.844 ] 00:16:43.844 }' 00:16:43.844 07:17:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:43.844 07:17:17 -- common/autotest_common.sh@10 -- # set +x 00:16:44.412 07:17:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:44.671 [2024-02-13 07:17:18.278326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.671 [2024-02-13 07:17:18.278411] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:16:44.671 [2024-02-13 07:17:18.278421] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:44.671 [2024-02-13 07:17:18.278551] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:16:44.671 [2024-02-13 07:17:18.279029] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:16:44.671 [2024-02-13 07:17:18.279053] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:16:44.671 [2024-02-13 07:17:18.279391] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.671 BaseBdev3 00:16:44.671 07:17:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:44.671 07:17:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:16:44.671 07:17:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:44.671 07:17:18 -- common/autotest_common.sh@887 -- # local i 00:16:44.671 07:17:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:44.671 07:17:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:44.671 07:17:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:44.930 07:17:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:45.189 [ 00:16:45.189 { 00:16:45.189 "name": "BaseBdev3", 00:16:45.189 "aliases": [ 00:16:45.189 "377fd71a-c194-462f-8d2e-80a8d9c61beb" 00:16:45.189 ], 00:16:45.189 "product_name": "Malloc disk", 00:16:45.189 "block_size": 512, 00:16:45.189 "num_blocks": 65536, 00:16:45.189 "uuid": "377fd71a-c194-462f-8d2e-80a8d9c61beb", 00:16:45.189 "assigned_rate_limits": { 00:16:45.189 "rw_ios_per_sec": 0, 00:16:45.189 "rw_mbytes_per_sec": 0, 
00:16:45.189 "r_mbytes_per_sec": 0, 00:16:45.189 "w_mbytes_per_sec": 0 00:16:45.189 }, 00:16:45.189 "claimed": true, 00:16:45.189 "claim_type": "exclusive_write", 00:16:45.189 "zoned": false, 00:16:45.189 "supported_io_types": { 00:16:45.189 "read": true, 00:16:45.189 "write": true, 00:16:45.189 "unmap": true, 00:16:45.189 "write_zeroes": true, 00:16:45.189 "flush": true, 00:16:45.189 "reset": true, 00:16:45.189 "compare": false, 00:16:45.189 "compare_and_write": false, 00:16:45.189 "abort": true, 00:16:45.189 "nvme_admin": false, 00:16:45.189 "nvme_io": false 00:16:45.189 }, 00:16:45.189 "memory_domains": [ 00:16:45.189 { 00:16:45.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.189 "dma_device_type": 2 00:16:45.189 } 00:16:45.189 ], 00:16:45.189 "driver_specific": {} 00:16:45.189 } 00:16:45.189 ] 00:16:45.189 07:17:18 -- common/autotest_common.sh@893 -- # return 0 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.189 07:17:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.448 07:17:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:45.448 "name": "Existed_Raid", 00:16:45.448 "uuid": "73504d1d-73ae-49ea-a7a8-ab9cc39169de", 00:16:45.448 "strip_size_kb": 0, 00:16:45.448 "state": "online", 00:16:45.448 "raid_level": "raid1", 00:16:45.448 "superblock": false, 00:16:45.448 "num_base_bdevs": 3, 00:16:45.448 "num_base_bdevs_discovered": 3, 00:16:45.448 "num_base_bdevs_operational": 3, 00:16:45.448 "base_bdevs_list": [ 00:16:45.448 { 00:16:45.448 "name": "BaseBdev1", 00:16:45.448 "uuid": "465d85b5-568c-471d-8dd9-ac6941d07665", 00:16:45.448 "is_configured": true, 00:16:45.448 "data_offset": 0, 00:16:45.448 "data_size": 65536 00:16:45.448 }, 00:16:45.448 { 00:16:45.448 "name": "BaseBdev2", 00:16:45.448 "uuid": "285864ba-183d-45c6-874b-c2c56849b9f1", 00:16:45.448 "is_configured": true, 00:16:45.448 "data_offset": 0, 00:16:45.448 "data_size": 65536 00:16:45.448 }, 00:16:45.448 { 00:16:45.448 "name": "BaseBdev3", 00:16:45.448 "uuid": "377fd71a-c194-462f-8d2e-80a8d9c61beb", 00:16:45.448 "is_configured": true, 00:16:45.448 "data_offset": 0, 00:16:45.448 "data_size": 65536 00:16:45.448 } 00:16:45.448 ] 00:16:45.448 }' 00:16:45.448 07:17:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:45.448 07:17:18 -- common/autotest_common.sh@10 -- # set +x 00:16:46.016 07:17:19 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:46.274 [2024-02-13 
07:17:19.774781] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:46.274 07:17:19 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:46.274 07:17:19 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:46.274 07:17:19 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:46.274 07:17:19 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:46.275 07:17:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:46.533 07:17:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:46.534 "name": "Existed_Raid", 00:16:46.534 "uuid": "73504d1d-73ae-49ea-a7a8-ab9cc39169de", 00:16:46.534 "strip_size_kb": 0, 00:16:46.534 "state": "online", 00:16:46.534 "raid_level": "raid1", 00:16:46.534 "superblock": false, 00:16:46.534 "num_base_bdevs": 3, 00:16:46.534 "num_base_bdevs_discovered": 2, 00:16:46.534 "num_base_bdevs_operational": 2, 00:16:46.534 "base_bdevs_list": [ 00:16:46.534 { 00:16:46.534 "name": null, 00:16:46.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.534 "is_configured": false, 00:16:46.534 "data_offset": 0, 00:16:46.534 "data_size": 65536 00:16:46.534 }, 00:16:46.534 { 00:16:46.534 "name": "BaseBdev2", 00:16:46.534 "uuid": "285864ba-183d-45c6-874b-c2c56849b9f1", 00:16:46.534 "is_configured": true, 00:16:46.534 "data_offset": 0, 00:16:46.534 "data_size": 65536 00:16:46.534 }, 00:16:46.534 { 00:16:46.534 "name": "BaseBdev3", 00:16:46.534 "uuid": "377fd71a-c194-462f-8d2e-80a8d9c61beb", 00:16:46.534 "is_configured": true, 00:16:46.534 "data_offset": 0, 00:16:46.534 "data_size": 65536 00:16:46.534 } 00:16:46.534 ] 00:16:46.534 }' 00:16:46.534 07:17:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:46.534 07:17:20 -- common/autotest_common.sh@10 -- # set +x 00:16:47.101 07:17:20 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:16:47.101 07:17:20 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:47.101 07:17:20 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.101 07:17:20 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:47.360 07:17:20 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:47.360 07:17:20 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:47.360 07:17:20 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:47.618 [2024-02-13 07:17:21.094965] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:16:47.618 07:17:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:47.618 07:17:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:47.618 07:17:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:16:47.618 07:17:21 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.877 07:17:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:16:47.877 07:17:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:47.877 07:17:21 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:48.136 [2024-02-13 07:17:21.633544] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:48.136 [2024-02-13 07:17:21.633600] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.136 [2024-02-13 07:17:21.633685] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.136 [2024-02-13 07:17:21.705303] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.136 [2024-02-13 07:17:21.705353] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:16:48.136 07:17:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:16:48.136 07:17:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:16:48.136 07:17:21 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:48.136 07:17:21 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:16:48.395 07:17:21 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:16:48.395 07:17:21 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:16:48.395 07:17:21 -- bdev/bdev_raid.sh@287 -- # killprocess 121525 00:16:48.395 07:17:21 -- common/autotest_common.sh@924 -- # '[' -z 121525 ']' 00:16:48.395 07:17:21 -- common/autotest_common.sh@928 -- # kill -0 121525 00:16:48.395 07:17:21 -- common/autotest_common.sh@929 -- # uname 00:16:48.395 07:17:21 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:16:48.395 07:17:21 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 121525 00:16:48.395 07:17:21 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:16:48.395 killing process with pid 121525 00:16:48.395 07:17:21 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:16:48.395 07:17:21 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 121525' 00:16:48.395 07:17:21 -- common/autotest_common.sh@943 -- # kill 121525 00:16:48.395 07:17:21 -- common/autotest_common.sh@948 -- # wait 121525 00:16:48.395 [2024-02-13 07:17:21.993260] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:48.395 [2024-02-13 07:17:21.993399] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:49.356 ************************************ 00:16:49.356 END TEST raid_state_function_test 00:16:49.356 ************************************ 00:16:49.356 07:17:22 -- bdev/bdev_raid.sh@289 -- # return 0 00:16:49.356 00:16:49.356 real 0m12.147s 00:16:49.356 user 0m21.432s 00:16:49.356 sys 0m1.494s 00:16:49.356 07:17:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:49.356 07:17:22 -- common/autotest_common.sh@10 -- # set +x 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 
00:16:49.356 07:17:23 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:16:49.356 07:17:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:16:49.356 07:17:23 -- common/autotest_common.sh@10 -- # set +x 00:16:49.356 ************************************ 00:16:49.356 START TEST raid_state_function_test_sb 00:16:49.356 ************************************ 00:16:49.356 07:17:23 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 3 true 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:16:49.356 07:17:23 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:16:49.615 07:17:23 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:16:49.615 07:17:23 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:16:49.615 07:17:23 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:16:49.615 07:17:23 -- bdev/bdev_raid.sh@226 -- # raid_pid=121932 00:16:49.615 07:17:23 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 121932' 00:16:49.615 Process raid pid: 121932 00:16:49.615 07:17:23 -- bdev/bdev_raid.sh@228 -- # waitforlisten 121932 /var/tmp/spdk-raid.sock 00:16:49.616 07:17:23 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:49.616 07:17:23 -- common/autotest_common.sh@817 -- # '[' -z 121932 ']' 00:16:49.616 07:17:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:49.616 07:17:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:49.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:49.616 07:17:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:16:49.616 07:17:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:49.616 07:17:23 -- common/autotest_common.sh@10 -- # set +x 00:16:49.616 [2024-02-13 07:17:23.119466] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:16:49.616 [2024-02-13 07:17:23.119776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:49.874 [2024-02-13 07:17:23.316449] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.874 [2024-02-13 07:17:23.511884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.132 [2024-02-13 07:17:23.690444] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:50.391 07:17:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:50.391 07:17:23 -- common/autotest_common.sh@850 -- # return 0 00:16:50.391 07:17:23 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:50.649 [2024-02-13 07:17:24.191222] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.649 [2024-02-13 07:17:24.191334] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:50.649 [2024-02-13 07:17:24.191348] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.649 [2024-02-13 07:17:24.191368] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.649 [2024-02-13 07:17:24.191375] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.649 [2024-02-13 07:17:24.191417] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:50.649 07:17:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.908 07:17:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:50.908 "name": "Existed_Raid", 00:16:50.908 "uuid": "14000369-390b-4321-85a2-572d52d6aadb", 00:16:50.908 "strip_size_kb": 0, 00:16:50.908 "state": "configuring", 00:16:50.908 "raid_level": "raid1", 00:16:50.908 "superblock": true, 00:16:50.908 "num_base_bdevs": 3, 00:16:50.908 "num_base_bdevs_discovered": 0, 00:16:50.908 "num_base_bdevs_operational": 3, 00:16:50.908 "base_bdevs_list": [ 00:16:50.908 { 00:16:50.908 "name": "BaseBdev1", 
00:16:50.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.908 "is_configured": false, 00:16:50.908 "data_offset": 0, 00:16:50.908 "data_size": 0 00:16:50.908 }, 00:16:50.908 { 00:16:50.908 "name": "BaseBdev2", 00:16:50.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.908 "is_configured": false, 00:16:50.908 "data_offset": 0, 00:16:50.908 "data_size": 0 00:16:50.908 }, 00:16:50.908 { 00:16:50.908 "name": "BaseBdev3", 00:16:50.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.908 "is_configured": false, 00:16:50.908 "data_offset": 0, 00:16:50.908 "data_size": 0 00:16:50.908 } 00:16:50.908 ] 00:16:50.908 }' 00:16:50.908 07:17:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:50.908 07:17:24 -- common/autotest_common.sh@10 -- # set +x 00:16:51.487 07:17:25 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:51.745 [2024-02-13 07:17:25.271262] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:51.745 [2024-02-13 07:17:25.271315] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:16:51.745 07:17:25 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:52.003 [2024-02-13 07:17:25.463376] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:52.003 [2024-02-13 07:17:25.463472] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:52.003 [2024-02-13 07:17:25.463485] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.003 [2024-02-13 07:17:25.463512] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.003 [2024-02-13 07:17:25.463520] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:52.003 [2024-02-13 07:17:25.463544] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:52.003 07:17:25 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:52.003 [2024-02-13 07:17:25.677086] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.003 BaseBdev1 00:16:52.003 07:17:25 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:16:52.003 07:17:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:52.003 07:17:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:52.003 07:17:25 -- common/autotest_common.sh@887 -- # local i 00:16:52.003 07:17:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:52.003 07:17:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:52.003 07:17:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:52.261 07:17:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:52.519 [ 00:16:52.519 { 00:16:52.519 "name": "BaseBdev1", 00:16:52.520 "aliases": [ 00:16:52.520 "1ba487a2-b936-48cf-b12d-da26d14580db" 00:16:52.520 ], 00:16:52.520 "product_name": "Malloc disk", 00:16:52.520 "block_size": 512, 00:16:52.520 "num_blocks": 65536, 
00:16:52.520 "uuid": "1ba487a2-b936-48cf-b12d-da26d14580db", 00:16:52.520 "assigned_rate_limits": { 00:16:52.520 "rw_ios_per_sec": 0, 00:16:52.520 "rw_mbytes_per_sec": 0, 00:16:52.520 "r_mbytes_per_sec": 0, 00:16:52.520 "w_mbytes_per_sec": 0 00:16:52.520 }, 00:16:52.520 "claimed": true, 00:16:52.520 "claim_type": "exclusive_write", 00:16:52.520 "zoned": false, 00:16:52.520 "supported_io_types": { 00:16:52.520 "read": true, 00:16:52.520 "write": true, 00:16:52.520 "unmap": true, 00:16:52.520 "write_zeroes": true, 00:16:52.520 "flush": true, 00:16:52.520 "reset": true, 00:16:52.520 "compare": false, 00:16:52.520 "compare_and_write": false, 00:16:52.520 "abort": true, 00:16:52.520 "nvme_admin": false, 00:16:52.520 "nvme_io": false 00:16:52.520 }, 00:16:52.520 "memory_domains": [ 00:16:52.520 { 00:16:52.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:52.520 "dma_device_type": 2 00:16:52.520 } 00:16:52.520 ], 00:16:52.520 "driver_specific": {} 00:16:52.520 } 00:16:52.520 ] 00:16:52.520 07:17:26 -- common/autotest_common.sh@893 -- # return 0 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.520 07:17:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.778 07:17:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:52.778 "name": "Existed_Raid", 00:16:52.778 "uuid": "1cd1e685-3bea-478c-a192-90e992c1c466", 00:16:52.778 "strip_size_kb": 0, 00:16:52.778 "state": "configuring", 00:16:52.778 "raid_level": "raid1", 00:16:52.778 "superblock": true, 00:16:52.778 "num_base_bdevs": 3, 00:16:52.778 "num_base_bdevs_discovered": 1, 00:16:52.778 "num_base_bdevs_operational": 3, 00:16:52.778 "base_bdevs_list": [ 00:16:52.778 { 00:16:52.778 "name": "BaseBdev1", 00:16:52.778 "uuid": "1ba487a2-b936-48cf-b12d-da26d14580db", 00:16:52.778 "is_configured": true, 00:16:52.778 "data_offset": 2048, 00:16:52.778 "data_size": 63488 00:16:52.778 }, 00:16:52.778 { 00:16:52.778 "name": "BaseBdev2", 00:16:52.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.778 "is_configured": false, 00:16:52.778 "data_offset": 0, 00:16:52.778 "data_size": 0 00:16:52.778 }, 00:16:52.778 { 00:16:52.778 "name": "BaseBdev3", 00:16:52.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.778 "is_configured": false, 00:16:52.778 "data_offset": 0, 00:16:52.778 "data_size": 0 00:16:52.778 } 00:16:52.778 ] 00:16:52.778 }' 00:16:52.778 07:17:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:52.778 07:17:26 -- common/autotest_common.sh@10 -- # set +x 00:16:53.715 07:17:27 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete 
Existed_Raid 00:16:53.715 [2024-02-13 07:17:27.249576] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:53.715 [2024-02-13 07:17:27.249664] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:16:53.715 07:17:27 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:16:53.715 07:17:27 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:53.974 07:17:27 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.233 BaseBdev1 00:16:54.234 07:17:27 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:16:54.234 07:17:27 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:16:54.234 07:17:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:54.234 07:17:27 -- common/autotest_common.sh@887 -- # local i 00:16:54.234 07:17:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:54.234 07:17:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:54.234 07:17:27 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:54.493 07:17:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.493 [ 00:16:54.493 { 00:16:54.493 "name": "BaseBdev1", 00:16:54.493 "aliases": [ 00:16:54.493 "a3a060c9-a979-4a5d-b782-5d19c07259aa" 00:16:54.493 ], 00:16:54.493 "product_name": "Malloc disk", 00:16:54.493 "block_size": 512, 00:16:54.493 "num_blocks": 65536, 00:16:54.493 "uuid": "a3a060c9-a979-4a5d-b782-5d19c07259aa", 00:16:54.493 "assigned_rate_limits": { 00:16:54.493 "rw_ios_per_sec": 0, 00:16:54.493 "rw_mbytes_per_sec": 0, 00:16:54.493 "r_mbytes_per_sec": 0, 00:16:54.493 "w_mbytes_per_sec": 0 00:16:54.493 }, 00:16:54.493 "claimed": false, 00:16:54.493 "zoned": false, 00:16:54.493 "supported_io_types": { 00:16:54.493 "read": true, 00:16:54.493 "write": true, 00:16:54.493 "unmap": true, 00:16:54.493 "write_zeroes": true, 00:16:54.493 "flush": true, 00:16:54.493 "reset": true, 00:16:54.493 "compare": false, 00:16:54.493 "compare_and_write": false, 00:16:54.493 "abort": true, 00:16:54.493 "nvme_admin": false, 00:16:54.493 "nvme_io": false 00:16:54.493 }, 00:16:54.493 "memory_domains": [ 00:16:54.493 { 00:16:54.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.493 "dma_device_type": 2 00:16:54.493 } 00:16:54.493 ], 00:16:54.493 "driver_specific": {} 00:16:54.493 } 00:16:54.493 ] 00:16:54.493 07:17:28 -- common/autotest_common.sh@893 -- # return 0 00:16:54.493 07:17:28 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:16:54.752 [2024-02-13 07:17:28.361830] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.752 [2024-02-13 07:17:28.363525] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:54.752 [2024-02-13 07:17:28.363597] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:54.752 [2024-02-13 07:17:28.363626] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:54.752 [2024-02-13 07:17:28.363649] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:54.752 07:17:28 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:16:54.752 07:17:28 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:54.752 07:17:28 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:54.752 07:17:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:54.752 07:17:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:54.752 07:17:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:54.752 07:17:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:54.752 07:17:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:54.752 07:17:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:54.753 07:17:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:54.753 07:17:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:54.753 07:17:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:54.753 07:17:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:54.753 07:17:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.011 07:17:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:55.011 "name": "Existed_Raid", 00:16:55.011 "uuid": "479a7333-baa9-4bcd-86c4-acc6e9e3fe6c", 00:16:55.011 "strip_size_kb": 0, 00:16:55.011 "state": "configuring", 00:16:55.011 "raid_level": "raid1", 00:16:55.011 "superblock": true, 00:16:55.011 "num_base_bdevs": 3, 00:16:55.011 "num_base_bdevs_discovered": 1, 00:16:55.011 "num_base_bdevs_operational": 3, 00:16:55.011 "base_bdevs_list": [ 00:16:55.011 { 00:16:55.011 "name": "BaseBdev1", 00:16:55.011 "uuid": "a3a060c9-a979-4a5d-b782-5d19c07259aa", 00:16:55.011 "is_configured": true, 00:16:55.011 "data_offset": 2048, 00:16:55.011 "data_size": 63488 00:16:55.011 }, 00:16:55.011 { 00:16:55.011 "name": "BaseBdev2", 00:16:55.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.011 "is_configured": false, 00:16:55.011 "data_offset": 0, 00:16:55.011 "data_size": 0 00:16:55.011 }, 00:16:55.011 { 00:16:55.011 "name": "BaseBdev3", 00:16:55.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.011 "is_configured": false, 00:16:55.011 "data_offset": 0, 00:16:55.011 "data_size": 0 00:16:55.011 } 00:16:55.011 ] 00:16:55.011 }' 00:16:55.011 07:17:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:55.011 07:17:28 -- common/autotest_common.sh@10 -- # set +x 00:16:55.948 07:17:29 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:55.948 [2024-02-13 07:17:29.510977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:55.948 BaseBdev2 00:16:55.948 07:17:29 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:16:55.948 07:17:29 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:16:55.948 07:17:29 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:55.948 07:17:29 -- common/autotest_common.sh@887 -- # local i 00:16:55.948 07:17:29 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:55.948 07:17:29 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:55.948 07:17:29 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:56.207 07:17:29 -- common/autotest_common.sh@892 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:56.466 [ 00:16:56.466 { 00:16:56.466 "name": "BaseBdev2", 00:16:56.466 "aliases": [ 00:16:56.466 "c16b519b-e1f7-4483-a75e-52df163c8a03" 00:16:56.466 ], 00:16:56.466 "product_name": "Malloc disk", 00:16:56.466 "block_size": 512, 00:16:56.466 "num_blocks": 65536, 00:16:56.466 "uuid": "c16b519b-e1f7-4483-a75e-52df163c8a03", 00:16:56.466 "assigned_rate_limits": { 00:16:56.466 "rw_ios_per_sec": 0, 00:16:56.466 "rw_mbytes_per_sec": 0, 00:16:56.466 "r_mbytes_per_sec": 0, 00:16:56.466 "w_mbytes_per_sec": 0 00:16:56.466 }, 00:16:56.466 "claimed": true, 00:16:56.466 "claim_type": "exclusive_write", 00:16:56.466 "zoned": false, 00:16:56.466 "supported_io_types": { 00:16:56.466 "read": true, 00:16:56.466 "write": true, 00:16:56.466 "unmap": true, 00:16:56.466 "write_zeroes": true, 00:16:56.466 "flush": true, 00:16:56.466 "reset": true, 00:16:56.466 "compare": false, 00:16:56.466 "compare_and_write": false, 00:16:56.466 "abort": true, 00:16:56.466 "nvme_admin": false, 00:16:56.466 "nvme_io": false 00:16:56.466 }, 00:16:56.466 "memory_domains": [ 00:16:56.466 { 00:16:56.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.466 "dma_device_type": 2 00:16:56.466 } 00:16:56.466 ], 00:16:56.466 "driver_specific": {} 00:16:56.466 } 00:16:56.466 ] 00:16:56.466 07:17:29 -- common/autotest_common.sh@893 -- # return 0 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.466 07:17:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.725 07:17:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:56.725 "name": "Existed_Raid", 00:16:56.725 "uuid": "479a7333-baa9-4bcd-86c4-acc6e9e3fe6c", 00:16:56.725 "strip_size_kb": 0, 00:16:56.725 "state": "configuring", 00:16:56.725 "raid_level": "raid1", 00:16:56.725 "superblock": true, 00:16:56.725 "num_base_bdevs": 3, 00:16:56.725 "num_base_bdevs_discovered": 2, 00:16:56.725 "num_base_bdevs_operational": 3, 00:16:56.725 "base_bdevs_list": [ 00:16:56.725 { 00:16:56.725 "name": "BaseBdev1", 00:16:56.725 "uuid": "a3a060c9-a979-4a5d-b782-5d19c07259aa", 00:16:56.725 "is_configured": true, 00:16:56.725 "data_offset": 2048, 00:16:56.725 "data_size": 63488 00:16:56.725 }, 00:16:56.725 { 00:16:56.725 "name": "BaseBdev2", 00:16:56.725 "uuid": "c16b519b-e1f7-4483-a75e-52df163c8a03", 00:16:56.725 "is_configured": true, 00:16:56.725 "data_offset": 2048, 00:16:56.725 "data_size": 63488 00:16:56.725 }, 
00:16:56.725 { 00:16:56.725 "name": "BaseBdev3", 00:16:56.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.725 "is_configured": false, 00:16:56.725 "data_offset": 0, 00:16:56.725 "data_size": 0 00:16:56.725 } 00:16:56.725 ] 00:16:56.725 }' 00:16:56.725 07:17:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:56.725 07:17:30 -- common/autotest_common.sh@10 -- # set +x 00:16:57.293 07:17:30 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:57.553 [2024-02-13 07:17:31.116554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:57.553 BaseBdev3 00:16:57.553 [2024-02-13 07:17:31.116832] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:16:57.553 [2024-02-13 07:17:31.116847] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:57.553 [2024-02-13 07:17:31.116962] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:16:57.553 [2024-02-13 07:17:31.117341] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:16:57.553 [2024-02-13 07:17:31.117356] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:16:57.553 [2024-02-13 07:17:31.117524] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.553 07:17:31 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:16:57.553 07:17:31 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:16:57.553 07:17:31 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:57.553 07:17:31 -- common/autotest_common.sh@887 -- # local i 00:16:57.553 07:17:31 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:57.553 07:17:31 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:57.553 07:17:31 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:57.812 07:17:31 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:57.812 [ 00:16:57.812 { 00:16:57.812 "name": "BaseBdev3", 00:16:57.812 "aliases": [ 00:16:57.812 "4b276bc7-14a7-4dfd-882b-955a7ea9dacf" 00:16:57.812 ], 00:16:57.812 "product_name": "Malloc disk", 00:16:57.812 "block_size": 512, 00:16:57.812 "num_blocks": 65536, 00:16:57.812 "uuid": "4b276bc7-14a7-4dfd-882b-955a7ea9dacf", 00:16:57.812 "assigned_rate_limits": { 00:16:57.812 "rw_ios_per_sec": 0, 00:16:57.812 "rw_mbytes_per_sec": 0, 00:16:57.812 "r_mbytes_per_sec": 0, 00:16:57.812 "w_mbytes_per_sec": 0 00:16:57.812 }, 00:16:57.812 "claimed": true, 00:16:57.812 "claim_type": "exclusive_write", 00:16:57.812 "zoned": false, 00:16:57.812 "supported_io_types": { 00:16:57.812 "read": true, 00:16:57.812 "write": true, 00:16:57.812 "unmap": true, 00:16:57.812 "write_zeroes": true, 00:16:57.812 "flush": true, 00:16:57.812 "reset": true, 00:16:57.812 "compare": false, 00:16:57.812 "compare_and_write": false, 00:16:57.812 "abort": true, 00:16:57.812 "nvme_admin": false, 00:16:57.812 "nvme_io": false 00:16:57.812 }, 00:16:57.812 "memory_domains": [ 00:16:57.812 { 00:16:57.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.812 "dma_device_type": 2 00:16:57.812 } 00:16:57.812 ], 00:16:57.812 "driver_specific": {} 00:16:57.812 } 00:16:57.812 ] 00:16:58.073 07:17:31 -- 
common/autotest_common.sh@893 -- # return 0 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:58.073 "name": "Existed_Raid", 00:16:58.073 "uuid": "479a7333-baa9-4bcd-86c4-acc6e9e3fe6c", 00:16:58.073 "strip_size_kb": 0, 00:16:58.073 "state": "online", 00:16:58.073 "raid_level": "raid1", 00:16:58.073 "superblock": true, 00:16:58.073 "num_base_bdevs": 3, 00:16:58.073 "num_base_bdevs_discovered": 3, 00:16:58.073 "num_base_bdevs_operational": 3, 00:16:58.073 "base_bdevs_list": [ 00:16:58.073 { 00:16:58.073 "name": "BaseBdev1", 00:16:58.073 "uuid": "a3a060c9-a979-4a5d-b782-5d19c07259aa", 00:16:58.073 "is_configured": true, 00:16:58.073 "data_offset": 2048, 00:16:58.073 "data_size": 63488 00:16:58.073 }, 00:16:58.073 { 00:16:58.073 "name": "BaseBdev2", 00:16:58.073 "uuid": "c16b519b-e1f7-4483-a75e-52df163c8a03", 00:16:58.073 "is_configured": true, 00:16:58.073 "data_offset": 2048, 00:16:58.073 "data_size": 63488 00:16:58.073 }, 00:16:58.073 { 00:16:58.073 "name": "BaseBdev3", 00:16:58.073 "uuid": "4b276bc7-14a7-4dfd-882b-955a7ea9dacf", 00:16:58.073 "is_configured": true, 00:16:58.073 "data_offset": 2048, 00:16:58.073 "data_size": 63488 00:16:58.073 } 00:16:58.073 ] 00:16:58.073 }' 00:16:58.073 07:17:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:58.073 07:17:31 -- common/autotest_common.sh@10 -- # set +x 00:16:59.009 07:17:32 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:59.009 [2024-02-13 07:17:32.672982] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@196 -- # return 0 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 
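The trace above is checking that a raid1 array survives losing one mirror: has_redundancy returns 0 for raid1, so the expected state stays "online" with only 2 of the 3 base bdevs operational. A minimal sketch of reproducing that sequence by hand against the same RPC socket (the rpc.py path, socket name, and RPC arguments are taken from this trace; the trailing .state jq filter is an assumption for illustration):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    # 32 MiB malloc bdev with 512-byte blocks, matching the test's parameters
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
done
"$rpc" -s "$sock" bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
"$rpc" -s "$sock" bdev_malloc_delete BaseBdev1   # drop one mirror
# raid1 is redundant, so the array should still report "online"
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'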
00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:59.268 07:17:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.526 07:17:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:16:59.526 "name": "Existed_Raid", 00:16:59.526 "uuid": "479a7333-baa9-4bcd-86c4-acc6e9e3fe6c", 00:16:59.526 "strip_size_kb": 0, 00:16:59.526 "state": "online", 00:16:59.526 "raid_level": "raid1", 00:16:59.526 "superblock": true, 00:16:59.526 "num_base_bdevs": 3, 00:16:59.526 "num_base_bdevs_discovered": 2, 00:16:59.526 "num_base_bdevs_operational": 2, 00:16:59.526 "base_bdevs_list": [ 00:16:59.526 { 00:16:59.526 "name": null, 00:16:59.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.526 "is_configured": false, 00:16:59.526 "data_offset": 2048, 00:16:59.526 "data_size": 63488 00:16:59.526 }, 00:16:59.526 { 00:16:59.526 "name": "BaseBdev2", 00:16:59.526 "uuid": "c16b519b-e1f7-4483-a75e-52df163c8a03", 00:16:59.526 "is_configured": true, 00:16:59.526 "data_offset": 2048, 00:16:59.526 "data_size": 63488 00:16:59.526 }, 00:16:59.526 { 00:16:59.526 "name": "BaseBdev3", 00:16:59.526 "uuid": "4b276bc7-14a7-4dfd-882b-955a7ea9dacf", 00:16:59.526 "is_configured": true, 00:16:59.526 "data_offset": 2048, 00:16:59.526 "data_size": 63488 00:16:59.526 } 00:16:59.526 ] 00:16:59.526 }' 00:16:59.526 07:17:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:16:59.526 07:17:33 -- common/autotest_common.sh@10 -- # set +x 00:17:00.094 07:17:33 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:00.094 07:17:33 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:00.094 07:17:33 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.094 07:17:33 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:00.352 07:17:33 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:00.352 07:17:33 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.352 07:17:33 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:00.611 [2024-02-13 07:17:34.081410] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:00.611 07:17:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:00.611 07:17:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:00.611 07:17:34 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.611 07:17:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:00.870 07:17:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:00.870 07:17:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:00.870 07:17:34 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:00.870 [2024-02-13 07:17:34.535543] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:00.870 [2024-02-13 07:17:34.535579] 
bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.870 [2024-02-13 07:17:34.535668] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:01.129 [2024-02-13 07:17:34.609874] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:01.129 [2024-02-13 07:17:34.609922] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:01.129 07:17:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:01.129 07:17:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:01.129 07:17:34 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.129 07:17:34 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:01.388 07:17:34 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:01.388 07:17:34 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:01.388 07:17:34 -- bdev/bdev_raid.sh@287 -- # killprocess 121932 00:17:01.388 07:17:34 -- common/autotest_common.sh@924 -- # '[' -z 121932 ']' 00:17:01.388 07:17:34 -- common/autotest_common.sh@928 -- # kill -0 121932 00:17:01.388 07:17:34 -- common/autotest_common.sh@929 -- # uname 00:17:01.388 07:17:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:01.388 07:17:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 121932 00:17:01.388 killing process with pid 121932 00:17:01.388 07:17:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:01.388 07:17:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:01.388 07:17:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 121932' 00:17:01.388 07:17:34 -- common/autotest_common.sh@943 -- # kill 121932 00:17:01.388 07:17:34 -- common/autotest_common.sh@948 -- # wait 121932 00:17:01.388 [2024-02-13 07:17:34.891861] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:01.388 [2024-02-13 07:17:34.892031] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.325 ************************************ 00:17:02.325 END TEST raid_state_function_test_sb 00:17:02.325 ************************************ 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:02.325 00:17:02.325 real 0m12.879s 00:17:02.325 user 0m22.812s 00:17:02.325 sys 0m1.580s 00:17:02.325 07:17:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:02.325 07:17:35 -- common/autotest_common.sh@10 -- # set +x 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:17:02.325 07:17:35 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:17:02.325 07:17:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:02.325 07:17:35 -- common/autotest_common.sh@10 -- # set +x 00:17:02.325 ************************************ 00:17:02.325 START TEST raid_superblock_test 00:17:02.325 ************************************ 00:17:02.325 07:17:35 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid1 3 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:02.325 07:17:35 -- 
bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@357 -- # raid_pid=122339 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@358 -- # waitforlisten 122339 /var/tmp/spdk-raid.sock 00:17:02.325 07:17:35 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:02.325 07:17:35 -- common/autotest_common.sh@817 -- # '[' -z 122339 ']' 00:17:02.325 07:17:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:02.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:02.325 07:17:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:02.325 07:17:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:02.325 07:17:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:02.325 07:17:35 -- common/autotest_common.sh@10 -- # set +x 00:17:02.585 [2024-02-13 07:17:36.042122] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
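raid_superblock_test drives the same rpc.py interface, but wraps each malloc bdev in a passthru bdev with a fixed UUID before assembling the array, and passes -s to bdev_raid_create so a superblock is written to every member. A condensed sketch of the setup traced below, assuming the same socket and using the commands and UUIDs exactly as they appear in this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
for i in 1 2 3; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "malloc$i"
    # the passthru layer gives each member a stable name and UUID
    "$rpc" -s "$sock" bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done
# -s: write a raid superblock to each base bdev
"$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s

The superblock is what the later steps exercise: it is why re-creating a deleted member is enough to pull it back into the array, and why creating a second raid directly on the malloc bdevs fails with "File exists".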
00:17:02.585 [2024-02-13 07:17:36.042343] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid122339 ] 00:17:02.585 [2024-02-13 07:17:36.197578] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.926 [2024-02-13 07:17:36.388474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.926 [2024-02-13 07:17:36.569899] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.507 07:17:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:03.507 07:17:36 -- common/autotest_common.sh@850 -- # return 0 00:17:03.507 07:17:36 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:03.507 07:17:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:03.507 07:17:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:03.507 07:17:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:03.507 07:17:36 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:03.507 07:17:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:03.507 07:17:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:03.507 07:17:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:03.507 07:17:36 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:03.766 malloc1 00:17:03.766 07:17:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:03.766 [2024-02-13 07:17:37.444735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:03.766 [2024-02-13 07:17:37.444865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.766 [2024-02-13 07:17:37.444902] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:03.766 [2024-02-13 07:17:37.444953] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.766 [2024-02-13 07:17:37.447608] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.766 [2024-02-13 07:17:37.447679] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:03.766 pt1 00:17:04.024 07:17:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:04.024 07:17:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:04.024 07:17:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:04.024 07:17:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:04.024 07:17:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:04.024 07:17:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:04.024 07:17:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:04.024 07:17:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:04.024 07:17:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:04.285 malloc2 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:04.285 [2024-02-13 07:17:37.935436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:04.285 [2024-02-13 07:17:37.935798] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.285 [2024-02-13 07:17:37.935886] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:04.285 [2024-02-13 07:17:37.936100] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.285 [2024-02-13 07:17:37.938670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.285 [2024-02-13 07:17:37.938869] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:04.285 pt2 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:04.285 07:17:37 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:04.544 malloc3 00:17:04.544 07:17:38 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:04.803 [2024-02-13 07:17:38.379184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:04.803 [2024-02-13 07:17:38.379458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:04.803 [2024-02-13 07:17:38.379539] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:04.803 [2024-02-13 07:17:38.379835] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:04.803 [2024-02-13 07:17:38.382033] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:04.803 [2024-02-13 07:17:38.382248] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:04.803 pt3 00:17:04.803 07:17:38 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:04.803 07:17:38 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:04.803 07:17:38 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:17:05.061 [2024-02-13 07:17:38.591255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.061 [2024-02-13 07:17:38.593469] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.061 [2024-02-13 07:17:38.593689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:05.061 [2024-02-13 07:17:38.593944] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:17:05.061 [2024-02-13 07:17:38.594054] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:05.061 [2024-02-13 07:17:38.594264] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:17:05.061 [2024-02-13 07:17:38.594817] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:17:05.061 [2024-02-13 07:17:38.595006] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:17:05.061 [2024-02-13 07:17:38.595335] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.061 07:17:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.320 07:17:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:05.320 "name": "raid_bdev1", 00:17:05.320 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:05.320 "strip_size_kb": 0, 00:17:05.320 "state": "online", 00:17:05.320 "raid_level": "raid1", 00:17:05.320 "superblock": true, 00:17:05.320 "num_base_bdevs": 3, 00:17:05.320 "num_base_bdevs_discovered": 3, 00:17:05.320 "num_base_bdevs_operational": 3, 00:17:05.320 "base_bdevs_list": [ 00:17:05.320 { 00:17:05.320 "name": "pt1", 00:17:05.320 "uuid": "743c9b2b-d2ab-5f05-8ad4-31f66f7e91c6", 00:17:05.320 "is_configured": true, 00:17:05.320 "data_offset": 2048, 00:17:05.320 "data_size": 63488 00:17:05.320 }, 00:17:05.320 { 00:17:05.320 "name": "pt2", 00:17:05.320 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:05.320 "is_configured": true, 00:17:05.320 "data_offset": 2048, 00:17:05.320 "data_size": 63488 00:17:05.320 }, 00:17:05.320 { 00:17:05.320 "name": "pt3", 00:17:05.320 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:05.320 "is_configured": true, 00:17:05.320 "data_offset": 2048, 00:17:05.320 "data_size": 63488 00:17:05.320 } 00:17:05.320 ] 00:17:05.320 }' 00:17:05.320 07:17:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:05.320 07:17:38 -- common/autotest_common.sh@10 -- # set +x 00:17:05.887 07:17:39 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:05.887 07:17:39 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:06.147 [2024-02-13 07:17:39.699745] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.147 07:17:39 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=86b0fb8e-ce98-44d8-a386-761b32163ce7 00:17:06.147 07:17:39 -- bdev/bdev_raid.sh@380 -- # '[' -z 86b0fb8e-ce98-44d8-a386-761b32163ce7 ']' 00:17:06.147 07:17:39 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:06.406 [2024-02-13 07:17:39.943539] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.406 [2024-02-13 07:17:39.943730] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.406 [2024-02-13 07:17:39.943919] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.406 [2024-02-13 07:17:39.944132] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.406 [2024-02-13 07:17:39.944237] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:17:06.406 07:17:39 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.406 07:17:39 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:06.664 07:17:40 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:06.664 07:17:40 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:06.664 07:17:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.664 07:17:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:06.924 07:17:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.924 07:17:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:06.924 07:17:40 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.924 07:17:40 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:07.182 07:17:40 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:07.182 07:17:40 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:07.441 07:17:41 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:07.441 07:17:41 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:07.441 07:17:41 -- common/autotest_common.sh@638 -- # local es=0 00:17:07.441 07:17:41 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:07.441 07:17:41 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.441 07:17:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:07.441 07:17:41 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.441 07:17:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:07.441 07:17:41 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.441 07:17:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:07.441 07:17:41 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:07.441 07:17:41 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:07.441 07:17:41 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:17:07.700 [2024-02-13 07:17:41.211764] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:07.700 [2024-02-13 07:17:41.214091] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:07.700 [2024-02-13 07:17:41.214357] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:07.700 [2024-02-13 07:17:41.214464] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:07.700 request: 00:17:07.700 { 00:17:07.700 "name": "raid_bdev1", 00:17:07.700 "raid_level": "raid1", 00:17:07.700 "base_bdevs": [ 00:17:07.700 "malloc1", 00:17:07.700 "malloc2", 00:17:07.700 "malloc3" 00:17:07.701 ], 00:17:07.701 "superblock": false, 00:17:07.701 "method": "bdev_raid_create", 00:17:07.701 "req_id": 1 00:17:07.701 } 00:17:07.701 Got JSON-RPC error response 00:17:07.701 response: 00:17:07.701 { 00:17:07.701 "code": -17, 00:17:07.701 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:07.701 } 00:17:07.701 [2024-02-13 07:17:41.214748] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:07.701 [2024-02-13 07:17:41.214821] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:07.701 [2024-02-13 07:17:41.214897] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.701 [2024-02-13 07:17:41.214930] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:17:07.701 07:17:41 -- common/autotest_common.sh@641 -- # es=1 00:17:07.701 07:17:41 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:07.701 07:17:41 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:07.701 07:17:41 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:07.701 07:17:41 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.701 07:17:41 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:07.959 07:17:41 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.960 [2024-02-13 07:17:41.611788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.960 [2024-02-13 07:17:41.612069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.960 [2024-02-13 07:17:41.612145] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:07.960 [2024-02-13 07:17:41.612349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.960 [2024-02-13 07:17:41.614676] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.960 [2024-02-13 07:17:41.614858] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.960 [2024-02-13 07:17:41.615063] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:07.960 [2024-02-13 07:17:41.615252] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:07.960 pt1 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:07.960 
07:17:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.960 07:17:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.219 07:17:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:08.219 "name": "raid_bdev1", 00:17:08.219 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:08.219 "strip_size_kb": 0, 00:17:08.219 "state": "configuring", 00:17:08.219 "raid_level": "raid1", 00:17:08.219 "superblock": true, 00:17:08.219 "num_base_bdevs": 3, 00:17:08.219 "num_base_bdevs_discovered": 1, 00:17:08.219 "num_base_bdevs_operational": 3, 00:17:08.219 "base_bdevs_list": [ 00:17:08.219 { 00:17:08.219 "name": "pt1", 00:17:08.219 "uuid": "743c9b2b-d2ab-5f05-8ad4-31f66f7e91c6", 00:17:08.219 "is_configured": true, 00:17:08.219 "data_offset": 2048, 00:17:08.219 "data_size": 63488 00:17:08.219 }, 00:17:08.219 { 00:17:08.219 "name": null, 00:17:08.219 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:08.219 "is_configured": false, 00:17:08.219 "data_offset": 2048, 00:17:08.219 "data_size": 63488 00:17:08.219 }, 00:17:08.219 { 00:17:08.219 "name": null, 00:17:08.219 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:08.219 "is_configured": false, 00:17:08.219 "data_offset": 2048, 00:17:08.219 "data_size": 63488 00:17:08.219 } 00:17:08.219 ] 00:17:08.219 }' 00:17:08.219 07:17:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:08.219 07:17:41 -- common/autotest_common.sh@10 -- # set +x 00:17:09.155 07:17:42 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:17:09.155 07:17:42 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.155 [2024-02-13 07:17:42.700058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.155 [2024-02-13 07:17:42.700355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.155 [2024-02-13 07:17:42.700445] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:09.155 [2024-02-13 07:17:42.700621] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.155 [2024-02-13 07:17:42.701249] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.155 [2024-02-13 07:17:42.701461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.155 [2024-02-13 07:17:42.701729] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:09.155 [2024-02-13 07:17:42.701856] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.155 pt2 00:17:09.155 07:17:42 -- bdev/bdev_raid.sh@417 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:09.414 [2024-02-13 07:17:42.904073] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:09.414 07:17:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.671 07:17:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:09.671 "name": "raid_bdev1", 00:17:09.671 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:09.671 "strip_size_kb": 0, 00:17:09.671 "state": "configuring", 00:17:09.671 "raid_level": "raid1", 00:17:09.671 "superblock": true, 00:17:09.671 "num_base_bdevs": 3, 00:17:09.671 "num_base_bdevs_discovered": 1, 00:17:09.671 "num_base_bdevs_operational": 3, 00:17:09.671 "base_bdevs_list": [ 00:17:09.671 { 00:17:09.671 "name": "pt1", 00:17:09.671 "uuid": "743c9b2b-d2ab-5f05-8ad4-31f66f7e91c6", 00:17:09.671 "is_configured": true, 00:17:09.671 "data_offset": 2048, 00:17:09.671 "data_size": 63488 00:17:09.671 }, 00:17:09.671 { 00:17:09.671 "name": null, 00:17:09.671 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:09.671 "is_configured": false, 00:17:09.671 "data_offset": 2048, 00:17:09.671 "data_size": 63488 00:17:09.671 }, 00:17:09.671 { 00:17:09.671 "name": null, 00:17:09.671 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:09.671 "is_configured": false, 00:17:09.671 "data_offset": 2048, 00:17:09.671 "data_size": 63488 00:17:09.672 } 00:17:09.672 ] 00:17:09.672 }' 00:17:09.672 07:17:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:09.672 07:17:43 -- common/autotest_common.sh@10 -- # set +x 00:17:10.239 07:17:43 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:17:10.239 07:17:43 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:10.239 07:17:43 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:10.498 [2024-02-13 07:17:44.068336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:10.498 [2024-02-13 07:17:44.068776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.498 [2024-02-13 07:17:44.068891] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:10.498 [2024-02-13 07:17:44.069113] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.498 [2024-02-13 07:17:44.069809] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.498 [2024-02-13 07:17:44.069995] vbdev_passthru.c: 
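The verification above left raid_bdev1 in "configuring" with only pt1 discovered. The next step relies on the on-disk superblocks: simply re-creating the missing passthru bdevs is enough, because the examine path spots the superblock on each new bdev ("raid superblock found on bdev pt2" below) and re-claims it, with no second bdev_raid_create call. A sketch of that reassembly, using the names from this trace (the .state jq filter is an assumption for illustration):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
"$rpc" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
"$rpc" -s "$sock" bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
# once all three members are rediscovered, the array flips from "configuring" to "online"
"$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'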
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:10.498 [2024-02-13 07:17:44.070283] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:10.498 [2024-02-13 07:17:44.070418] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:10.498 pt2 00:17:10.498 07:17:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:10.498 07:17:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:10.498 07:17:44 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:10.757 [2024-02-13 07:17:44.312406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:10.757 [2024-02-13 07:17:44.312666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:10.757 [2024-02-13 07:17:44.312838] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:10.757 [2024-02-13 07:17:44.312961] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:10.757 [2024-02-13 07:17:44.313658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:10.757 [2024-02-13 07:17:44.313833] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:10.757 [2024-02-13 07:17:44.314104] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:10.757 [2024-02-13 07:17:44.314268] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:10.757 [2024-02-13 07:17:44.314548] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:17:10.757 [2024-02-13 07:17:44.314674] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:10.757 [2024-02-13 07:17:44.314832] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:17:10.757 [2024-02-13 07:17:44.315306] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:17:10.757 [2024-02-13 07:17:44.315432] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:17:10.757 [2024-02-13 07:17:44.315675] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.757 pt3 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:10.757 07:17:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.757 07:17:44 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.015 07:17:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:11.015 "name": "raid_bdev1", 00:17:11.015 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:11.015 "strip_size_kb": 0, 00:17:11.015 "state": "online", 00:17:11.016 "raid_level": "raid1", 00:17:11.016 "superblock": true, 00:17:11.016 "num_base_bdevs": 3, 00:17:11.016 "num_base_bdevs_discovered": 3, 00:17:11.016 "num_base_bdevs_operational": 3, 00:17:11.016 "base_bdevs_list": [ 00:17:11.016 { 00:17:11.016 "name": "pt1", 00:17:11.016 "uuid": "743c9b2b-d2ab-5f05-8ad4-31f66f7e91c6", 00:17:11.016 "is_configured": true, 00:17:11.016 "data_offset": 2048, 00:17:11.016 "data_size": 63488 00:17:11.016 }, 00:17:11.016 { 00:17:11.016 "name": "pt2", 00:17:11.016 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:11.016 "is_configured": true, 00:17:11.016 "data_offset": 2048, 00:17:11.016 "data_size": 63488 00:17:11.016 }, 00:17:11.016 { 00:17:11.016 "name": "pt3", 00:17:11.016 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:11.016 "is_configured": true, 00:17:11.016 "data_offset": 2048, 00:17:11.016 "data_size": 63488 00:17:11.016 } 00:17:11.016 ] 00:17:11.016 }' 00:17:11.016 07:17:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:11.016 07:17:44 -- common/autotest_common.sh@10 -- # set +x 00:17:11.583 07:17:45 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:11.583 07:17:45 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:17:11.841 [2024-02-13 07:17:45.508911] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:11.841 07:17:45 -- bdev/bdev_raid.sh@430 -- # '[' 86b0fb8e-ce98-44d8-a386-761b32163ce7 '!=' 86b0fb8e-ce98-44d8-a386-761b32163ce7 ']' 00:17:11.841 07:17:45 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:17:11.841 07:17:45 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:11.841 07:17:45 -- bdev/bdev_raid.sh@196 -- # return 0 00:17:11.841 07:17:45 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:12.099 [2024-02-13 07:17:45.760778] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:12.099 07:17:45 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:12.100 07:17:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.358 07:17:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:12.358 "name": "raid_bdev1", 00:17:12.358 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:12.358 "strip_size_kb": 0, 00:17:12.358 "state": "online", 
00:17:12.358 "raid_level": "raid1", 00:17:12.358 "superblock": true, 00:17:12.358 "num_base_bdevs": 3, 00:17:12.358 "num_base_bdevs_discovered": 2, 00:17:12.358 "num_base_bdevs_operational": 2, 00:17:12.358 "base_bdevs_list": [ 00:17:12.358 { 00:17:12.358 "name": null, 00:17:12.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.358 "is_configured": false, 00:17:12.358 "data_offset": 2048, 00:17:12.358 "data_size": 63488 00:17:12.358 }, 00:17:12.358 { 00:17:12.358 "name": "pt2", 00:17:12.358 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:12.358 "is_configured": true, 00:17:12.358 "data_offset": 2048, 00:17:12.358 "data_size": 63488 00:17:12.358 }, 00:17:12.358 { 00:17:12.358 "name": "pt3", 00:17:12.358 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:12.358 "is_configured": true, 00:17:12.358 "data_offset": 2048, 00:17:12.358 "data_size": 63488 00:17:12.358 } 00:17:12.358 ] 00:17:12.358 }' 00:17:12.358 07:17:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:12.358 07:17:45 -- common/autotest_common.sh@10 -- # set +x 00:17:13.292 07:17:46 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:13.292 [2024-02-13 07:17:46.924936] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:13.292 [2024-02-13 07:17:46.925165] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:13.292 [2024-02-13 07:17:46.925343] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:13.292 [2024-02-13 07:17:46.925563] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:13.292 [2024-02-13 07:17:46.925686] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:17:13.292 07:17:46 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.292 07:17:46 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:17:13.551 07:17:47 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:17:13.551 07:17:47 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:17:13.551 07:17:47 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:17:13.551 07:17:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:13.551 07:17:47 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:13.809 07:17:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:13.809 07:17:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:13.809 07:17:47 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:14.068 07:17:47 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:17:14.068 07:17:47 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:17:14.068 07:17:47 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:17:14.068 07:17:47 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:14.068 07:17:47 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:14.327 [2024-02-13 07:17:47.773062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:14.327 [2024-02-13 07:17:47.773367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.327 [2024-02-13 
07:17:47.773447] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:14.327 [2024-02-13 07:17:47.773661] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.327 [2024-02-13 07:17:47.776096] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.327 [2024-02-13 07:17:47.776272] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:14.327 [2024-02-13 07:17:47.776543] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:14.327 [2024-02-13 07:17:47.776714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.327 pt2 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.327 07:17:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.327 07:17:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:14.327 "name": "raid_bdev1", 00:17:14.327 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:14.327 "strip_size_kb": 0, 00:17:14.327 "state": "configuring", 00:17:14.327 "raid_level": "raid1", 00:17:14.327 "superblock": true, 00:17:14.327 "num_base_bdevs": 3, 00:17:14.327 "num_base_bdevs_discovered": 1, 00:17:14.327 "num_base_bdevs_operational": 2, 00:17:14.327 "base_bdevs_list": [ 00:17:14.327 { 00:17:14.327 "name": null, 00:17:14.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.327 "is_configured": false, 00:17:14.327 "data_offset": 2048, 00:17:14.327 "data_size": 63488 00:17:14.327 }, 00:17:14.327 { 00:17:14.327 "name": "pt2", 00:17:14.327 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:14.327 "is_configured": true, 00:17:14.327 "data_offset": 2048, 00:17:14.327 "data_size": 63488 00:17:14.327 }, 00:17:14.327 { 00:17:14.327 "name": null, 00:17:14.327 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:14.327 "is_configured": false, 00:17:14.327 "data_offset": 2048, 00:17:14.327 "data_size": 63488 00:17:14.327 } 00:17:14.327 ] 00:17:14.327 }' 00:17:14.327 07:17:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:14.327 07:17:48 -- common/autotest_common.sh@10 -- # set +x 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@462 -- # i=2 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:15.276 [2024-02-13 07:17:48.885402] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:15.276 [2024-02-13 07:17:48.885673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:15.276 [2024-02-13 07:17:48.885854] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:15.276 [2024-02-13 07:17:48.885986] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:15.276 [2024-02-13 07:17:48.886613] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:15.276 [2024-02-13 07:17:48.886772] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:15.276 [2024-02-13 07:17:48.886995] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:15.276 [2024-02-13 07:17:48.887130] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:15.276 [2024-02-13 07:17:48.887302] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:17:15.276 [2024-02-13 07:17:48.887416] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:15.276 [2024-02-13 07:17:48.887580] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:15.276 [2024-02-13 07:17:48.888041] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:17:15.276 [2024-02-13 07:17:48.888172] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:17:15.276 [2024-02-13 07:17:48.888417] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.276 pt3 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.276 07:17:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.542 07:17:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:15.542 "name": "raid_bdev1", 00:17:15.542 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:15.542 "strip_size_kb": 0, 00:17:15.542 "state": "online", 00:17:15.542 "raid_level": "raid1", 00:17:15.542 "superblock": true, 00:17:15.542 "num_base_bdevs": 3, 00:17:15.542 "num_base_bdevs_discovered": 2, 00:17:15.542 "num_base_bdevs_operational": 2, 00:17:15.542 "base_bdevs_list": [ 00:17:15.542 { 00:17:15.542 "name": null, 00:17:15.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.542 "is_configured": false, 00:17:15.542 "data_offset": 2048, 00:17:15.542 "data_size": 63488 00:17:15.542 }, 00:17:15.542 { 00:17:15.542 "name": "pt2", 00:17:15.542 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:15.542 
"is_configured": true, 00:17:15.542 "data_offset": 2048, 00:17:15.542 "data_size": 63488 00:17:15.542 }, 00:17:15.542 { 00:17:15.542 "name": "pt3", 00:17:15.542 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:15.542 "is_configured": true, 00:17:15.542 "data_offset": 2048, 00:17:15.542 "data_size": 63488 00:17:15.542 } 00:17:15.542 ] 00:17:15.542 }' 00:17:15.542 07:17:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:15.542 07:17:49 -- common/autotest_common.sh@10 -- # set +x 00:17:16.478 07:17:49 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:17:16.478 07:17:49 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:16.478 [2024-02-13 07:17:50.105746] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:16.478 [2024-02-13 07:17:50.105924] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.478 [2024-02-13 07:17:50.106122] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.478 [2024-02-13 07:17:50.106296] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.478 [2024-02-13 07:17:50.106422] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:17:16.478 07:17:50 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.478 07:17:50 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:17:16.737 07:17:50 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:17:16.737 07:17:50 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:17:16.737 07:17:50 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:16.996 [2024-02-13 07:17:50.569838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:16.996 [2024-02-13 07:17:50.570099] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.996 [2024-02-13 07:17:50.570175] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:16.996 [2024-02-13 07:17:50.570454] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.996 [2024-02-13 07:17:50.572592] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.996 [2024-02-13 07:17:50.572773] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:16.996 [2024-02-13 07:17:50.573015] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:16.996 [2024-02-13 07:17:50.573204] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:16.996 pt1 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:16.996 07:17:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.255 07:17:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:17.255 "name": "raid_bdev1", 00:17:17.255 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:17.256 "strip_size_kb": 0, 00:17:17.256 "state": "configuring", 00:17:17.256 "raid_level": "raid1", 00:17:17.256 "superblock": true, 00:17:17.256 "num_base_bdevs": 3, 00:17:17.256 "num_base_bdevs_discovered": 1, 00:17:17.256 "num_base_bdevs_operational": 3, 00:17:17.256 "base_bdevs_list": [ 00:17:17.256 { 00:17:17.256 "name": "pt1", 00:17:17.256 "uuid": "743c9b2b-d2ab-5f05-8ad4-31f66f7e91c6", 00:17:17.256 "is_configured": true, 00:17:17.256 "data_offset": 2048, 00:17:17.256 "data_size": 63488 00:17:17.256 }, 00:17:17.256 { 00:17:17.256 "name": null, 00:17:17.256 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:17.256 "is_configured": false, 00:17:17.256 "data_offset": 2048, 00:17:17.256 "data_size": 63488 00:17:17.256 }, 00:17:17.256 { 00:17:17.256 "name": null, 00:17:17.256 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:17.256 "is_configured": false, 00:17:17.256 "data_offset": 2048, 00:17:17.256 "data_size": 63488 00:17:17.256 } 00:17:17.256 ] 00:17:17.256 }' 00:17:17.256 07:17:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:17.256 07:17:50 -- common/autotest_common.sh@10 -- # set +x 00:17:17.823 07:17:51 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:17:17.823 07:17:51 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:17.823 07:17:51 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:18.082 07:17:51 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:18.082 07:17:51 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:18.082 07:17:51 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:18.341 07:17:51 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:17:18.341 07:17:51 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:17:18.341 07:17:51 -- bdev/bdev_raid.sh@489 -- # i=2 00:17:18.341 07:17:51 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:18.600 [2024-02-13 07:17:52.162202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:18.600 [2024-02-13 07:17:52.162513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.600 [2024-02-13 07:17:52.162593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:18.601 [2024-02-13 07:17:52.162844] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.601 [2024-02-13 07:17:52.163523] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.601 [2024-02-13 07:17:52.163714] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:18.601 [2024-02-13 07:17:52.163937] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:17:18.601 
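The verify_raid_bdev_state expansions traced above all follow the same shape: set the expected state, level, strip size, and operational base-bdev count as locals, then fetch the raid bdev's JSON via bdev_raid_get_bdevs and filter it with jq. The assertions themselves run after xtrace is disabled, so they are not visible in this log; the helper below is a minimal reconstruction from the visible locals and RPC call, and the field comparisons are an assumption based on the dumped JSON (state, raid_level, strip_size_kb, num_base_bdevs_operational), not the verbatim bdev_raid.sh code.

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    verify_raid_bdev_state() {
        # Arguments mirror the traced calls, e.g.:
        #   verify_raid_bdev_state raid_bdev1 online raid1 0 2
        local raid_bdev_name=$1 expected_state=$2 raid_level=$3
        local strip_size=$4 num_base_bdevs_operational=$5
        local info
        # RPC and jq filter copied from the @127 lines in the trace.
        info=$($rpc_py bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        [[ -n $info ]] || return 1
        # Assumed assertions, matching the fields of the dumped JSON.
        [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] || return 1
        [[ $(jq -r '.raid_level' <<<"$info") == "$raid_level" ]] || return 1
        (( $(jq -r '.strip_size_kb' <<<"$info") == strip_size )) || return 1
        (( $(jq -r '.num_base_bdevs_operational' <<<"$info") == num_base_bdevs_operational )) || return 1
    }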
[2024-02-13 07:17:52.164055] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:18.601 [2024-02-13 07:17:52.164153] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:18.601 [2024-02-13 07:17:52.164293] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:17:18.601 [2024-02-13 07:17:52.164487] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:18.601 pt3 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.601 07:17:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.859 07:17:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:18.860 "name": "raid_bdev1", 00:17:18.860 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:18.860 "strip_size_kb": 0, 00:17:18.860 "state": "configuring", 00:17:18.860 "raid_level": "raid1", 00:17:18.860 "superblock": true, 00:17:18.860 "num_base_bdevs": 3, 00:17:18.860 "num_base_bdevs_discovered": 1, 00:17:18.860 "num_base_bdevs_operational": 2, 00:17:18.860 "base_bdevs_list": [ 00:17:18.860 { 00:17:18.860 "name": null, 00:17:18.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.860 "is_configured": false, 00:17:18.860 "data_offset": 2048, 00:17:18.860 "data_size": 63488 00:17:18.860 }, 00:17:18.860 { 00:17:18.860 "name": null, 00:17:18.860 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:18.860 "is_configured": false, 00:17:18.860 "data_offset": 2048, 00:17:18.860 "data_size": 63488 00:17:18.860 }, 00:17:18.860 { 00:17:18.860 "name": "pt3", 00:17:18.860 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:18.860 "is_configured": true, 00:17:18.860 "data_offset": 2048, 00:17:18.860 "data_size": 63488 00:17:18.860 } 00:17:18.860 ] 00:17:18.860 }' 00:17:18.860 07:17:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:18.860 07:17:52 -- common/autotest_common.sh@10 -- # set +x 00:17:19.796 07:17:53 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:17:19.796 07:17:53 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:19.796 07:17:53 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:19.797 [2024-02-13 07:17:53.322530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:19.797 [2024-02-13 07:17:53.322865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.797 [2024-02-13 07:17:53.322938] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:19.797 [2024-02-13 07:17:53.323197] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.797 [2024-02-13 07:17:53.323876] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.797 [2024-02-13 07:17:53.324057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:19.797 [2024-02-13 07:17:53.324283] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:17:19.797 [2024-02-13 07:17:53.324416] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.797 [2024-02-13 07:17:53.324696] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:17:19.797 [2024-02-13 07:17:53.324811] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:17:19.797 [2024-02-13 07:17:53.324996] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:19.797 [2024-02-13 07:17:53.325593] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:17:19.797 [2024-02-13 07:17:53.325722] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:17:19.797 [2024-02-13 07:17:53.325961] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.797 pt2 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:19.797 07:17:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.055 07:17:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:20.055 "name": "raid_bdev1", 00:17:20.055 "uuid": "86b0fb8e-ce98-44d8-a386-761b32163ce7", 00:17:20.055 "strip_size_kb": 0, 00:17:20.055 "state": "online", 00:17:20.055 "raid_level": "raid1", 00:17:20.055 "superblock": true, 00:17:20.055 "num_base_bdevs": 3, 00:17:20.055 "num_base_bdevs_discovered": 2, 00:17:20.055 "num_base_bdevs_operational": 2, 00:17:20.055 "base_bdevs_list": [ 00:17:20.055 { 00:17:20.055 "name": null, 00:17:20.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.055 "is_configured": false, 00:17:20.055 "data_offset": 2048, 00:17:20.055 "data_size": 63488 00:17:20.055 }, 00:17:20.055 { 00:17:20.055 "name": "pt2", 00:17:20.055 "uuid": "79204ce4-ba2c-568c-ba5d-5be703216b71", 00:17:20.055 "is_configured": true, 00:17:20.055 "data_offset": 2048, 00:17:20.055 "data_size": 63488 00:17:20.056 
}, 00:17:20.056 { 00:17:20.056 "name": "pt3", 00:17:20.056 "uuid": "7e5c8e23-03af-56a4-8b15-822393209957", 00:17:20.056 "is_configured": true, 00:17:20.056 "data_offset": 2048, 00:17:20.056 "data_size": 63488 00:17:20.056 } 00:17:20.056 ] 00:17:20.056 }' 00:17:20.056 07:17:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:20.056 07:17:53 -- common/autotest_common.sh@10 -- # set +x 00:17:20.623 07:17:54 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:20.624 07:17:54 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:17:20.883 [2024-02-13 07:17:54.450955] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.883 07:17:54 -- bdev/bdev_raid.sh@506 -- # '[' 86b0fb8e-ce98-44d8-a386-761b32163ce7 '!=' 86b0fb8e-ce98-44d8-a386-761b32163ce7 ']' 00:17:20.883 07:17:54 -- bdev/bdev_raid.sh@511 -- # killprocess 122339 00:17:20.883 07:17:54 -- common/autotest_common.sh@924 -- # '[' -z 122339 ']' 00:17:20.883 07:17:54 -- common/autotest_common.sh@928 -- # kill -0 122339 00:17:20.883 07:17:54 -- common/autotest_common.sh@929 -- # uname 00:17:20.883 07:17:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:20.883 07:17:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 122339 00:17:20.883 killing process with pid 122339 00:17:20.883 07:17:54 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:20.883 07:17:54 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:20.883 07:17:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 122339' 00:17:20.883 07:17:54 -- common/autotest_common.sh@943 -- # kill 122339 00:17:20.883 07:17:54 -- common/autotest_common.sh@948 -- # wait 122339 00:17:20.883 [2024-02-13 07:17:54.487741] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:20.883 [2024-02-13 07:17:54.487839] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.883 [2024-02-13 07:17:54.487951] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.883 [2024-02-13 07:17:54.488107] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:17:21.141 [2024-02-13 07:17:54.705287] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.077 ************************************ 00:17:22.077 END TEST raid_superblock_test 00:17:22.077 ************************************ 00:17:22.077 07:17:55 -- bdev/bdev_raid.sh@513 -- # return 0 00:17:22.077 00:17:22.077 real 0m19.756s 00:17:22.077 user 0m36.470s 00:17:22.077 sys 0m2.218s 00:17:22.077 07:17:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:22.077 07:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:17:22.335 07:17:55 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:17:22.335 07:17:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:22.335 07:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.335 ************************************ 00:17:22.335 START TEST raid_state_function_test 00:17:22.335 ************************************ 00:17:22.335 07:17:55 -- 
common/autotest_common.sh@1102 -- # raid_state_function_test raid0 4 false 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.335 07:17:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@226 -- # raid_pid=122988 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 122988' 00:17:22.336 Process raid pid: 122988 00:17:22.336 07:17:55 -- bdev/bdev_raid.sh@228 -- # waitforlisten 122988 /var/tmp/spdk-raid.sock 00:17:22.336 07:17:55 -- common/autotest_common.sh@817 -- # '[' -z 122988 ']' 00:17:22.336 07:17:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:22.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:22.336 07:17:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:22.336 07:17:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:22.336 07:17:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:22.336 07:17:55 -- common/autotest_common.sh@10 -- # set +x 00:17:22.336 [2024-02-13 07:17:55.875831] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
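The raid_state_function_test stage starting here exercises raid bdev state transitions entirely over the RPC socket. A rough manual equivalent of the flow the traced RPCs drive, assuming a bdev_svc target already listening on /var/tmp/spdk-raid.sock (names, sizes, and flags copied from the trace; this is an approximation of the sequence, not the test script itself):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Create the raid first; none of its base bdevs exist yet, so it sits
    # in the "configuring" state with num_base_bdevs_discovered == 0.
    $rpc_py bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # Register the malloc base bdevs one at a time (32 MiB with 512-byte
    # blocks, i.e. the 65536-block Malloc disks dumped below). Each one is
    # claimed as it appears, and num_base_bdevs_discovered climbs until the
    # state flips from "configuring" to "online" after the fourth.
    for i in 1 2 3 4; do
        $rpc_py bdev_malloc_create 32 512 -b "BaseBdev$i"
        $rpc_py bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "Existed_Raid") | .state'
    done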
00:17:22.336 [2024-02-13 07:17:55.876239] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.594 [2024-02-13 07:17:56.033024] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.594 [2024-02-13 07:17:56.233986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.853 [2024-02-13 07:17:56.426057] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.421 07:17:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:23.421 07:17:56 -- common/autotest_common.sh@850 -- # return 0 00:17:23.421 07:17:56 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:23.421 [2024-02-13 07:17:57.058891] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:23.421 [2024-02-13 07:17:57.059138] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:23.421 [2024-02-13 07:17:57.059251] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:23.421 [2024-02-13 07:17:57.059312] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:23.421 [2024-02-13 07:17:57.059486] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:23.421 [2024-02-13 07:17:57.059564] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:23.421 [2024-02-13 07:17:57.059596] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:23.421 [2024-02-13 07:17:57.059638] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.421 07:17:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:23.680 07:17:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:23.680 "name": "Existed_Raid", 00:17:23.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.680 "strip_size_kb": 64, 00:17:23.680 "state": "configuring", 00:17:23.680 "raid_level": "raid0", 00:17:23.680 "superblock": false, 00:17:23.680 "num_base_bdevs": 4, 00:17:23.680 "num_base_bdevs_discovered": 0, 00:17:23.680 "num_base_bdevs_operational": 4, 00:17:23.680 "base_bdevs_list": [ 00:17:23.680 { 00:17:23.680 
"name": "BaseBdev1", 00:17:23.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.680 "is_configured": false, 00:17:23.680 "data_offset": 0, 00:17:23.680 "data_size": 0 00:17:23.680 }, 00:17:23.680 { 00:17:23.680 "name": "BaseBdev2", 00:17:23.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.680 "is_configured": false, 00:17:23.680 "data_offset": 0, 00:17:23.680 "data_size": 0 00:17:23.680 }, 00:17:23.680 { 00:17:23.680 "name": "BaseBdev3", 00:17:23.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.680 "is_configured": false, 00:17:23.680 "data_offset": 0, 00:17:23.680 "data_size": 0 00:17:23.680 }, 00:17:23.680 { 00:17:23.680 "name": "BaseBdev4", 00:17:23.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.680 "is_configured": false, 00:17:23.680 "data_offset": 0, 00:17:23.680 "data_size": 0 00:17:23.680 } 00:17:23.680 ] 00:17:23.680 }' 00:17:23.680 07:17:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:23.680 07:17:57 -- common/autotest_common.sh@10 -- # set +x 00:17:24.617 07:17:57 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:24.617 [2024-02-13 07:17:58.191051] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:24.617 [2024-02-13 07:17:58.191435] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:24.617 07:17:58 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:24.876 [2024-02-13 07:17:58.443175] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:24.877 [2024-02-13 07:17:58.443443] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:24.877 [2024-02-13 07:17:58.443636] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:24.877 [2024-02-13 07:17:58.443701] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:24.877 [2024-02-13 07:17:58.443822] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:24.877 [2024-02-13 07:17:58.443958] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:24.877 [2024-02-13 07:17:58.444052] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:24.877 [2024-02-13 07:17:58.444134] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:24.877 07:17:58 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:25.136 [2024-02-13 07:17:58.726167] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.136 BaseBdev1 00:17:25.136 07:17:58 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:25.136 07:17:58 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:25.136 07:17:58 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:25.136 07:17:58 -- common/autotest_common.sh@887 -- # local i 00:17:25.136 07:17:58 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:25.136 07:17:58 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:25.136 07:17:58 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:25.395 07:17:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:25.654 [ 00:17:25.654 { 00:17:25.654 "name": "BaseBdev1", 00:17:25.654 "aliases": [ 00:17:25.654 "2606531f-0eda-4830-a55a-9594b23416a2" 00:17:25.654 ], 00:17:25.654 "product_name": "Malloc disk", 00:17:25.654 "block_size": 512, 00:17:25.654 "num_blocks": 65536, 00:17:25.654 "uuid": "2606531f-0eda-4830-a55a-9594b23416a2", 00:17:25.654 "assigned_rate_limits": { 00:17:25.654 "rw_ios_per_sec": 0, 00:17:25.654 "rw_mbytes_per_sec": 0, 00:17:25.654 "r_mbytes_per_sec": 0, 00:17:25.654 "w_mbytes_per_sec": 0 00:17:25.654 }, 00:17:25.654 "claimed": true, 00:17:25.654 "claim_type": "exclusive_write", 00:17:25.654 "zoned": false, 00:17:25.654 "supported_io_types": { 00:17:25.654 "read": true, 00:17:25.654 "write": true, 00:17:25.654 "unmap": true, 00:17:25.654 "write_zeroes": true, 00:17:25.654 "flush": true, 00:17:25.654 "reset": true, 00:17:25.654 "compare": false, 00:17:25.654 "compare_and_write": false, 00:17:25.654 "abort": true, 00:17:25.654 "nvme_admin": false, 00:17:25.654 "nvme_io": false 00:17:25.654 }, 00:17:25.654 "memory_domains": [ 00:17:25.654 { 00:17:25.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.654 "dma_device_type": 2 00:17:25.654 } 00:17:25.654 ], 00:17:25.654 "driver_specific": {} 00:17:25.654 } 00:17:25.654 ] 00:17:25.654 07:17:59 -- common/autotest_common.sh@893 -- # return 0 00:17:25.654 07:17:59 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:25.654 07:17:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:25.654 07:17:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:25.654 07:17:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:25.654 07:17:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:25.655 07:17:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:25.655 07:17:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:25.655 07:17:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:25.655 07:17:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:25.655 07:17:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:25.655 07:17:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.655 07:17:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:25.913 07:17:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:25.913 "name": "Existed_Raid", 00:17:25.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.913 "strip_size_kb": 64, 00:17:25.913 "state": "configuring", 00:17:25.913 "raid_level": "raid0", 00:17:25.913 "superblock": false, 00:17:25.913 "num_base_bdevs": 4, 00:17:25.913 "num_base_bdevs_discovered": 1, 00:17:25.913 "num_base_bdevs_operational": 4, 00:17:25.913 "base_bdevs_list": [ 00:17:25.913 { 00:17:25.913 "name": "BaseBdev1", 00:17:25.913 "uuid": "2606531f-0eda-4830-a55a-9594b23416a2", 00:17:25.913 "is_configured": true, 00:17:25.913 "data_offset": 0, 00:17:25.913 "data_size": 65536 00:17:25.913 }, 00:17:25.913 { 00:17:25.913 "name": "BaseBdev2", 00:17:25.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.913 "is_configured": false, 00:17:25.913 "data_offset": 0, 00:17:25.913 "data_size": 0 00:17:25.913 }, 
00:17:25.913 { 00:17:25.913 "name": "BaseBdev3", 00:17:25.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.913 "is_configured": false, 00:17:25.913 "data_offset": 0, 00:17:25.913 "data_size": 0 00:17:25.913 }, 00:17:25.913 { 00:17:25.913 "name": "BaseBdev4", 00:17:25.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.913 "is_configured": false, 00:17:25.913 "data_offset": 0, 00:17:25.913 "data_size": 0 00:17:25.913 } 00:17:25.913 ] 00:17:25.913 }' 00:17:25.913 07:17:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:25.913 07:17:59 -- common/autotest_common.sh@10 -- # set +x 00:17:26.481 07:18:00 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:26.739 [2024-02-13 07:18:00.214529] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:26.740 [2024-02-13 07:18:00.214749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:26.740 07:18:00 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:17:26.740 07:18:00 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:26.999 [2024-02-13 07:18:00.454657] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:26.999 [2024-02-13 07:18:00.456841] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.999 [2024-02-13 07:18:00.457096] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.999 [2024-02-13 07:18:00.457249] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:26.999 [2024-02-13 07:18:00.457375] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:26.999 [2024-02-13 07:18:00.457517] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:26.999 [2024-02-13 07:18:00.457572] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.999 07:18:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:27.258 07:18:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:27.258 "name": "Existed_Raid", 00:17:27.258 
"uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.258 "strip_size_kb": 64, 00:17:27.258 "state": "configuring", 00:17:27.258 "raid_level": "raid0", 00:17:27.258 "superblock": false, 00:17:27.258 "num_base_bdevs": 4, 00:17:27.258 "num_base_bdevs_discovered": 1, 00:17:27.258 "num_base_bdevs_operational": 4, 00:17:27.258 "base_bdevs_list": [ 00:17:27.258 { 00:17:27.258 "name": "BaseBdev1", 00:17:27.258 "uuid": "2606531f-0eda-4830-a55a-9594b23416a2", 00:17:27.258 "is_configured": true, 00:17:27.258 "data_offset": 0, 00:17:27.258 "data_size": 65536 00:17:27.258 }, 00:17:27.258 { 00:17:27.258 "name": "BaseBdev2", 00:17:27.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.258 "is_configured": false, 00:17:27.258 "data_offset": 0, 00:17:27.258 "data_size": 0 00:17:27.258 }, 00:17:27.258 { 00:17:27.258 "name": "BaseBdev3", 00:17:27.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.258 "is_configured": false, 00:17:27.258 "data_offset": 0, 00:17:27.258 "data_size": 0 00:17:27.258 }, 00:17:27.258 { 00:17:27.258 "name": "BaseBdev4", 00:17:27.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.258 "is_configured": false, 00:17:27.258 "data_offset": 0, 00:17:27.258 "data_size": 0 00:17:27.258 } 00:17:27.258 ] 00:17:27.258 }' 00:17:27.258 07:18:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:27.258 07:18:00 -- common/autotest_common.sh@10 -- # set +x 00:17:27.826 07:18:01 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:28.085 [2024-02-13 07:18:01.566246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.085 BaseBdev2 00:17:28.085 07:18:01 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:28.085 07:18:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:28.085 07:18:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:28.085 07:18:01 -- common/autotest_common.sh@887 -- # local i 00:17:28.085 07:18:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:28.085 07:18:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:28.085 07:18:01 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:28.344 07:18:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:28.603 [ 00:17:28.603 { 00:17:28.603 "name": "BaseBdev2", 00:17:28.603 "aliases": [ 00:17:28.603 "6283506c-0d73-47af-b96f-6bd139be3948" 00:17:28.603 ], 00:17:28.603 "product_name": "Malloc disk", 00:17:28.603 "block_size": 512, 00:17:28.603 "num_blocks": 65536, 00:17:28.603 "uuid": "6283506c-0d73-47af-b96f-6bd139be3948", 00:17:28.603 "assigned_rate_limits": { 00:17:28.603 "rw_ios_per_sec": 0, 00:17:28.603 "rw_mbytes_per_sec": 0, 00:17:28.603 "r_mbytes_per_sec": 0, 00:17:28.603 "w_mbytes_per_sec": 0 00:17:28.603 }, 00:17:28.603 "claimed": true, 00:17:28.603 "claim_type": "exclusive_write", 00:17:28.603 "zoned": false, 00:17:28.603 "supported_io_types": { 00:17:28.603 "read": true, 00:17:28.603 "write": true, 00:17:28.603 "unmap": true, 00:17:28.603 "write_zeroes": true, 00:17:28.603 "flush": true, 00:17:28.603 "reset": true, 00:17:28.603 "compare": false, 00:17:28.603 "compare_and_write": false, 00:17:28.603 "abort": true, 00:17:28.603 "nvme_admin": false, 00:17:28.603 "nvme_io": false 00:17:28.603 }, 00:17:28.603 "memory_domains": [ 
00:17:28.603 { 00:17:28.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.603 "dma_device_type": 2 00:17:28.603 } 00:17:28.603 ], 00:17:28.603 "driver_specific": {} 00:17:28.603 } 00:17:28.603 ] 00:17:28.603 07:18:02 -- common/autotest_common.sh@893 -- # return 0 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.603 07:18:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.862 07:18:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:28.862 "name": "Existed_Raid", 00:17:28.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.862 "strip_size_kb": 64, 00:17:28.862 "state": "configuring", 00:17:28.862 "raid_level": "raid0", 00:17:28.862 "superblock": false, 00:17:28.862 "num_base_bdevs": 4, 00:17:28.863 "num_base_bdevs_discovered": 2, 00:17:28.863 "num_base_bdevs_operational": 4, 00:17:28.863 "base_bdevs_list": [ 00:17:28.863 { 00:17:28.863 "name": "BaseBdev1", 00:17:28.863 "uuid": "2606531f-0eda-4830-a55a-9594b23416a2", 00:17:28.863 "is_configured": true, 00:17:28.863 "data_offset": 0, 00:17:28.863 "data_size": 65536 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "name": "BaseBdev2", 00:17:28.863 "uuid": "6283506c-0d73-47af-b96f-6bd139be3948", 00:17:28.863 "is_configured": true, 00:17:28.863 "data_offset": 0, 00:17:28.863 "data_size": 65536 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "name": "BaseBdev3", 00:17:28.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.863 "is_configured": false, 00:17:28.863 "data_offset": 0, 00:17:28.863 "data_size": 0 00:17:28.863 }, 00:17:28.863 { 00:17:28.863 "name": "BaseBdev4", 00:17:28.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.863 "is_configured": false, 00:17:28.863 "data_offset": 0, 00:17:28.863 "data_size": 0 00:17:28.863 } 00:17:28.863 ] 00:17:28.863 }' 00:17:28.863 07:18:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:28.863 07:18:02 -- common/autotest_common.sh@10 -- # set +x 00:17:29.429 07:18:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:29.687 [2024-02-13 07:18:03.214720] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:29.687 BaseBdev3 00:17:29.687 07:18:03 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:29.687 07:18:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:17:29.687 07:18:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:29.687 
07:18:03 -- common/autotest_common.sh@887 -- # local i 00:17:29.687 07:18:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:29.687 07:18:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:29.687 07:18:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:29.946 07:18:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:30.204 [ 00:17:30.204 { 00:17:30.204 "name": "BaseBdev3", 00:17:30.204 "aliases": [ 00:17:30.204 "c0a65579-a925-4084-9f24-583b0981d05c" 00:17:30.204 ], 00:17:30.204 "product_name": "Malloc disk", 00:17:30.204 "block_size": 512, 00:17:30.204 "num_blocks": 65536, 00:17:30.204 "uuid": "c0a65579-a925-4084-9f24-583b0981d05c", 00:17:30.204 "assigned_rate_limits": { 00:17:30.204 "rw_ios_per_sec": 0, 00:17:30.205 "rw_mbytes_per_sec": 0, 00:17:30.205 "r_mbytes_per_sec": 0, 00:17:30.205 "w_mbytes_per_sec": 0 00:17:30.205 }, 00:17:30.205 "claimed": true, 00:17:30.205 "claim_type": "exclusive_write", 00:17:30.205 "zoned": false, 00:17:30.205 "supported_io_types": { 00:17:30.205 "read": true, 00:17:30.205 "write": true, 00:17:30.205 "unmap": true, 00:17:30.205 "write_zeroes": true, 00:17:30.205 "flush": true, 00:17:30.205 "reset": true, 00:17:30.205 "compare": false, 00:17:30.205 "compare_and_write": false, 00:17:30.205 "abort": true, 00:17:30.205 "nvme_admin": false, 00:17:30.205 "nvme_io": false 00:17:30.205 }, 00:17:30.205 "memory_domains": [ 00:17:30.205 { 00:17:30.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.205 "dma_device_type": 2 00:17:30.205 } 00:17:30.205 ], 00:17:30.205 "driver_specific": {} 00:17:30.205 } 00:17:30.205 ] 00:17:30.205 07:18:03 -- common/autotest_common.sh@893 -- # return 0 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.205 07:18:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.463 07:18:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:30.463 "name": "Existed_Raid", 00:17:30.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.463 "strip_size_kb": 64, 00:17:30.463 "state": "configuring", 00:17:30.463 "raid_level": "raid0", 00:17:30.463 "superblock": false, 00:17:30.463 "num_base_bdevs": 4, 00:17:30.463 "num_base_bdevs_discovered": 3, 00:17:30.463 "num_base_bdevs_operational": 4, 00:17:30.463 "base_bdevs_list": [ 00:17:30.463 { 00:17:30.463 "name": 
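The waitforbdev expansions traced above reduce to two RPCs: bdev_wait_for_examine flushes pending examine callbacks, then bdev_get_bdevs with -b and -t blocks target-side until the named bdev is registered or the timeout expires, so the shell needs no polling loop. A minimal reconstruction from the traced @885-@892 lines (the 2000 ms default is visible in the expansion; the real helper in autotest_common.sh may carry extra checks not shown here):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=$2
        [[ -z $bdev_timeout ]] && bdev_timeout=2000  # ms, default seen in the trace
        # Both RPCs appear verbatim in the expansion above.
        $rpc_py bdev_wait_for_examine
        $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }

    waitforbdev BaseBdev4   # returns once the bdev created just above exists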
"BaseBdev1", 00:17:30.463 "uuid": "2606531f-0eda-4830-a55a-9594b23416a2", 00:17:30.463 "is_configured": true, 00:17:30.463 "data_offset": 0, 00:17:30.463 "data_size": 65536 00:17:30.463 }, 00:17:30.463 { 00:17:30.463 "name": "BaseBdev2", 00:17:30.463 "uuid": "6283506c-0d73-47af-b96f-6bd139be3948", 00:17:30.463 "is_configured": true, 00:17:30.463 "data_offset": 0, 00:17:30.463 "data_size": 65536 00:17:30.463 }, 00:17:30.463 { 00:17:30.463 "name": "BaseBdev3", 00:17:30.463 "uuid": "c0a65579-a925-4084-9f24-583b0981d05c", 00:17:30.463 "is_configured": true, 00:17:30.463 "data_offset": 0, 00:17:30.463 "data_size": 65536 00:17:30.463 }, 00:17:30.463 { 00:17:30.463 "name": "BaseBdev4", 00:17:30.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.463 "is_configured": false, 00:17:30.463 "data_offset": 0, 00:17:30.463 "data_size": 0 00:17:30.463 } 00:17:30.463 ] 00:17:30.463 }' 00:17:30.463 07:18:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:30.463 07:18:03 -- common/autotest_common.sh@10 -- # set +x 00:17:31.029 07:18:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:31.288 [2024-02-13 07:18:04.902044] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:31.288 [2024-02-13 07:18:04.902405] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:17:31.288 [2024-02-13 07:18:04.902451] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:17:31.288 [2024-02-13 07:18:04.902678] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:17:31.288 [2024-02-13 07:18:04.903188] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:17:31.288 [2024-02-13 07:18:04.903373] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:17:31.288 [2024-02-13 07:18:04.903785] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.288 BaseBdev4 00:17:31.288 07:18:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:31.288 07:18:04 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:17:31.288 07:18:04 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:31.288 07:18:04 -- common/autotest_common.sh@887 -- # local i 00:17:31.288 07:18:04 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:31.288 07:18:04 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:31.288 07:18:04 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:31.546 07:18:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:31.805 [ 00:17:31.805 { 00:17:31.805 "name": "BaseBdev4", 00:17:31.805 "aliases": [ 00:17:31.805 "4fa8d8bf-6266-44ec-b2ed-890cbd4cfc52" 00:17:31.805 ], 00:17:31.805 "product_name": "Malloc disk", 00:17:31.805 "block_size": 512, 00:17:31.805 "num_blocks": 65536, 00:17:31.805 "uuid": "4fa8d8bf-6266-44ec-b2ed-890cbd4cfc52", 00:17:31.805 "assigned_rate_limits": { 00:17:31.805 "rw_ios_per_sec": 0, 00:17:31.805 "rw_mbytes_per_sec": 0, 00:17:31.805 "r_mbytes_per_sec": 0, 00:17:31.805 "w_mbytes_per_sec": 0 00:17:31.805 }, 00:17:31.805 "claimed": true, 00:17:31.805 "claim_type": "exclusive_write", 00:17:31.805 "zoned": false, 00:17:31.805 
"supported_io_types": { 00:17:31.805 "read": true, 00:17:31.805 "write": true, 00:17:31.805 "unmap": true, 00:17:31.805 "write_zeroes": true, 00:17:31.805 "flush": true, 00:17:31.805 "reset": true, 00:17:31.805 "compare": false, 00:17:31.805 "compare_and_write": false, 00:17:31.805 "abort": true, 00:17:31.805 "nvme_admin": false, 00:17:31.805 "nvme_io": false 00:17:31.805 }, 00:17:31.805 "memory_domains": [ 00:17:31.805 { 00:17:31.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.805 "dma_device_type": 2 00:17:31.805 } 00:17:31.805 ], 00:17:31.805 "driver_specific": {} 00:17:31.805 } 00:17:31.805 ] 00:17:31.805 07:18:05 -- common/autotest_common.sh@893 -- # return 0 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:31.805 07:18:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.064 07:18:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:32.064 "name": "Existed_Raid", 00:17:32.064 "uuid": "d6d43821-3d64-4400-a6cb-383d76e9040e", 00:17:32.064 "strip_size_kb": 64, 00:17:32.064 "state": "online", 00:17:32.064 "raid_level": "raid0", 00:17:32.064 "superblock": false, 00:17:32.064 "num_base_bdevs": 4, 00:17:32.064 "num_base_bdevs_discovered": 4, 00:17:32.064 "num_base_bdevs_operational": 4, 00:17:32.064 "base_bdevs_list": [ 00:17:32.064 { 00:17:32.064 "name": "BaseBdev1", 00:17:32.064 "uuid": "2606531f-0eda-4830-a55a-9594b23416a2", 00:17:32.064 "is_configured": true, 00:17:32.064 "data_offset": 0, 00:17:32.064 "data_size": 65536 00:17:32.064 }, 00:17:32.064 { 00:17:32.064 "name": "BaseBdev2", 00:17:32.064 "uuid": "6283506c-0d73-47af-b96f-6bd139be3948", 00:17:32.064 "is_configured": true, 00:17:32.064 "data_offset": 0, 00:17:32.064 "data_size": 65536 00:17:32.064 }, 00:17:32.064 { 00:17:32.064 "name": "BaseBdev3", 00:17:32.064 "uuid": "c0a65579-a925-4084-9f24-583b0981d05c", 00:17:32.064 "is_configured": true, 00:17:32.064 "data_offset": 0, 00:17:32.064 "data_size": 65536 00:17:32.064 }, 00:17:32.064 { 00:17:32.064 "name": "BaseBdev4", 00:17:32.064 "uuid": "4fa8d8bf-6266-44ec-b2ed-890cbd4cfc52", 00:17:32.064 "is_configured": true, 00:17:32.064 "data_offset": 0, 00:17:32.064 "data_size": 65536 00:17:32.064 } 00:17:32.064 ] 00:17:32.064 }' 00:17:32.064 07:18:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:32.064 07:18:05 -- common/autotest_common.sh@10 -- # set +x 00:17:32.632 07:18:06 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:32.890 
[2024-02-13 07:18:06.474576] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.890 [2024-02-13 07:18:06.474810] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.890 [2024-02-13 07:18:06.474995] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.890 07:18:06 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:32.890 07:18:06 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:32.890 07:18:06 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:32.890 07:18:06 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:32.890 07:18:06 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.891 07:18:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.150 07:18:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:33.150 "name": "Existed_Raid", 00:17:33.150 "uuid": "d6d43821-3d64-4400-a6cb-383d76e9040e", 00:17:33.150 "strip_size_kb": 64, 00:17:33.150 "state": "offline", 00:17:33.150 "raid_level": "raid0", 00:17:33.150 "superblock": false, 00:17:33.150 "num_base_bdevs": 4, 00:17:33.150 "num_base_bdevs_discovered": 3, 00:17:33.150 "num_base_bdevs_operational": 3, 00:17:33.150 "base_bdevs_list": [ 00:17:33.150 { 00:17:33.150 "name": null, 00:17:33.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.150 "is_configured": false, 00:17:33.150 "data_offset": 0, 00:17:33.150 "data_size": 65536 00:17:33.150 }, 00:17:33.150 { 00:17:33.150 "name": "BaseBdev2", 00:17:33.150 "uuid": "6283506c-0d73-47af-b96f-6bd139be3948", 00:17:33.150 "is_configured": true, 00:17:33.150 "data_offset": 0, 00:17:33.150 "data_size": 65536 00:17:33.150 }, 00:17:33.150 { 00:17:33.150 "name": "BaseBdev3", 00:17:33.150 "uuid": "c0a65579-a925-4084-9f24-583b0981d05c", 00:17:33.150 "is_configured": true, 00:17:33.150 "data_offset": 0, 00:17:33.150 "data_size": 65536 00:17:33.150 }, 00:17:33.150 { 00:17:33.150 "name": "BaseBdev4", 00:17:33.150 "uuid": "4fa8d8bf-6266-44ec-b2ed-890cbd4cfc52", 00:17:33.150 "is_configured": true, 00:17:33.150 "data_offset": 0, 00:17:33.150 "data_size": 65536 00:17:33.150 } 00:17:33.150 ] 00:17:33.150 }' 00:17:33.150 07:18:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:33.150 07:18:06 -- common/autotest_common.sh@10 -- # set +x 00:17:34.086 07:18:07 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:34.086 07:18:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.086 07:18:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.086 
07:18:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:34.086 07:18:07 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:34.086 07:18:07 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.086 07:18:07 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:34.370 [2024-02-13 07:18:07.911051] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:34.370 07:18:07 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:34.370 07:18:07 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.370 07:18:07 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.370 07:18:07 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:34.640 07:18:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:34.641 07:18:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:34.641 07:18:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:34.899 [2024-02-13 07:18:08.449091] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:34.899 07:18:08 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:34.899 07:18:08 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:34.899 07:18:08 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.899 07:18:08 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:35.158 07:18:08 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:35.158 07:18:08 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:35.158 07:18:08 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:35.417 [2024-02-13 07:18:09.025861] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:35.417 [2024-02-13 07:18:09.026108] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:17:35.676 07:18:09 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:35.676 07:18:09 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:35.676 07:18:09 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.676 07:18:09 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:35.676 07:18:09 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:35.676 07:18:09 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:35.676 07:18:09 -- bdev/bdev_raid.sh@287 -- # killprocess 122988 00:17:35.676 07:18:09 -- common/autotest_common.sh@924 -- # '[' -z 122988 ']' 00:17:35.676 07:18:09 -- common/autotest_common.sh@928 -- # kill -0 122988 00:17:35.676 07:18:09 -- common/autotest_common.sh@929 -- # uname 00:17:35.676 07:18:09 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:35.676 07:18:09 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 122988 00:17:35.676 killing process with pid 122988 00:17:35.676 07:18:09 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:35.676 07:18:09 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:35.676 07:18:09 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 122988' 00:17:35.676 07:18:09 -- common/autotest_common.sh@943 -- # kill 122988 
00:17:35.676 07:18:09 -- common/autotest_common.sh@948 -- # wait 122988 00:17:35.676 [2024-02-13 07:18:09.346658] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:35.676 [2024-02-13 07:18:09.346832] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:37.054 ************************************ 00:17:37.054 END TEST raid_state_function_test 00:17:37.054 ************************************ 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:37.054 00:17:37.054 real 0m14.631s 00:17:37.054 user 0m26.336s 00:17:37.054 sys 0m1.596s 00:17:37.054 07:18:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:37.054 07:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:17:37.054 07:18:10 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:17:37.054 07:18:10 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:37.054 07:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:37.054 ************************************ 00:17:37.054 START TEST raid_state_function_test_sb 00:17:37.054 ************************************ 00:17:37.054 07:18:10 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid0 4 true 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@226 -- # raid_pid=123444 
00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 123444' 00:17:37.054 Process raid pid: 123444 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:37.054 07:18:10 -- bdev/bdev_raid.sh@228 -- # waitforlisten 123444 /var/tmp/spdk-raid.sock 00:17:37.054 07:18:10 -- common/autotest_common.sh@817 -- # '[' -z 123444 ']' 00:17:37.054 07:18:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:37.054 07:18:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:37.054 07:18:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:37.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:37.054 07:18:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:37.054 07:18:10 -- common/autotest_common.sh@10 -- # set +x 00:17:37.054 [2024-02-13 07:18:10.576775] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:17:37.054 [2024-02-13 07:18:10.577278] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.054 [2024-02-13 07:18:10.745044] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.313 [2024-02-13 07:18:10.983756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.572 [2024-02-13 07:18:11.177098] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:37.831 07:18:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:37.831 07:18:11 -- common/autotest_common.sh@850 -- # return 0 00:17:37.831 07:18:11 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:38.090 [2024-02-13 07:18:11.650727] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:38.090 [2024-02-13 07:18:11.650999] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:38.090 [2024-02-13 07:18:11.651118] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:38.090 [2024-02-13 07:18:11.651182] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:38.090 [2024-02-13 07:18:11.651274] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:38.090 [2024-02-13 07:18:11.651437] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:38.090 [2024-02-13 07:18:11.651545] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:38.090 [2024-02-13 07:18:11.651608] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:38.090 07:18:11 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.090 07:18:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:38.349 07:18:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:38.349 "name": "Existed_Raid", 00:17:38.349 "uuid": "97664ccb-1f43-4b73-b6fc-fe7029a80d40", 00:17:38.349 "strip_size_kb": 64, 00:17:38.349 "state": "configuring", 00:17:38.349 "raid_level": "raid0", 00:17:38.349 "superblock": true, 00:17:38.349 "num_base_bdevs": 4, 00:17:38.349 "num_base_bdevs_discovered": 0, 00:17:38.349 "num_base_bdevs_operational": 4, 00:17:38.349 "base_bdevs_list": [ 00:17:38.349 { 00:17:38.349 "name": "BaseBdev1", 00:17:38.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.349 "is_configured": false, 00:17:38.349 "data_offset": 0, 00:17:38.349 "data_size": 0 00:17:38.349 }, 00:17:38.349 { 00:17:38.349 "name": "BaseBdev2", 00:17:38.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.349 "is_configured": false, 00:17:38.349 "data_offset": 0, 00:17:38.349 "data_size": 0 00:17:38.349 }, 00:17:38.349 { 00:17:38.349 "name": "BaseBdev3", 00:17:38.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.349 "is_configured": false, 00:17:38.349 "data_offset": 0, 00:17:38.349 "data_size": 0 00:17:38.349 }, 00:17:38.349 { 00:17:38.349 "name": "BaseBdev4", 00:17:38.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.349 "is_configured": false, 00:17:38.349 "data_offset": 0, 00:17:38.349 "data_size": 0 00:17:38.349 } 00:17:38.349 ] 00:17:38.349 }' 00:17:38.349 07:18:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:38.349 07:18:11 -- common/autotest_common.sh@10 -- # set +x 00:17:38.916 07:18:12 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:39.174 [2024-02-13 07:18:12.822901] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:39.174 [2024-02-13 07:18:12.823348] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:17:39.174 07:18:12 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:39.433 [2024-02-13 07:18:13.091042] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:39.433 [2024-02-13 07:18:13.091324] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:39.433 [2024-02-13 07:18:13.091505] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:39.433 [2024-02-13 07:18:13.091584] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:39.433 [2024-02-13 07:18:13.091793] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:39.433 [2024-02-13 07:18:13.091886] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:39.433 [2024-02-13 07:18:13.091916] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:39.433 [2024-02-13 07:18:13.092150] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:39.433 07:18:13 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:40.001 [2024-02-13 07:18:13.385055] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.001 BaseBdev1 00:17:40.001 07:18:13 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:17:40.001 07:18:13 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:40.001 07:18:13 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:40.001 07:18:13 -- common/autotest_common.sh@887 -- # local i 00:17:40.001 07:18:13 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:40.001 07:18:13 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:40.001 07:18:13 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:40.001 07:18:13 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:40.260 [ 00:17:40.260 { 00:17:40.260 "name": "BaseBdev1", 00:17:40.260 "aliases": [ 00:17:40.260 "6b4482d3-88eb-4b50-85bc-98545cf4ca62" 00:17:40.260 ], 00:17:40.260 "product_name": "Malloc disk", 00:17:40.260 "block_size": 512, 00:17:40.260 "num_blocks": 65536, 00:17:40.260 "uuid": "6b4482d3-88eb-4b50-85bc-98545cf4ca62", 00:17:40.260 "assigned_rate_limits": { 00:17:40.260 "rw_ios_per_sec": 0, 00:17:40.260 "rw_mbytes_per_sec": 0, 00:17:40.260 "r_mbytes_per_sec": 0, 00:17:40.260 "w_mbytes_per_sec": 0 00:17:40.260 }, 00:17:40.260 "claimed": true, 00:17:40.260 "claim_type": "exclusive_write", 00:17:40.260 "zoned": false, 00:17:40.260 "supported_io_types": { 00:17:40.260 "read": true, 00:17:40.260 "write": true, 00:17:40.260 "unmap": true, 00:17:40.260 "write_zeroes": true, 00:17:40.260 "flush": true, 00:17:40.260 "reset": true, 00:17:40.260 "compare": false, 00:17:40.260 "compare_and_write": false, 00:17:40.260 "abort": true, 00:17:40.260 "nvme_admin": false, 00:17:40.260 "nvme_io": false 00:17:40.260 }, 00:17:40.260 "memory_domains": [ 00:17:40.260 { 00:17:40.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:40.260 "dma_device_type": 2 00:17:40.260 } 00:17:40.260 ], 00:17:40.260 "driver_specific": {} 00:17:40.260 } 00:17:40.260 ] 00:17:40.260 07:18:13 -- common/autotest_common.sh@893 -- # return 0 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.260 07:18:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:40.828 07:18:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:40.828 "name": "Existed_Raid", 00:17:40.828 "uuid": "679d078c-7e21-4652-ad50-c49f319d10a2", 00:17:40.828 "strip_size_kb": 64, 00:17:40.828 "state": "configuring", 00:17:40.828 "raid_level": "raid0", 00:17:40.828 "superblock": true, 00:17:40.828 "num_base_bdevs": 4, 00:17:40.828 "num_base_bdevs_discovered": 1, 00:17:40.828 "num_base_bdevs_operational": 4, 00:17:40.828 "base_bdevs_list": [ 00:17:40.828 { 00:17:40.828 "name": "BaseBdev1", 00:17:40.828 "uuid": "6b4482d3-88eb-4b50-85bc-98545cf4ca62", 00:17:40.828 "is_configured": true, 00:17:40.828 "data_offset": 2048, 00:17:40.828 "data_size": 63488 00:17:40.828 }, 00:17:40.828 { 00:17:40.828 "name": "BaseBdev2", 00:17:40.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.828 "is_configured": false, 00:17:40.828 "data_offset": 0, 00:17:40.828 "data_size": 0 00:17:40.828 }, 00:17:40.828 { 00:17:40.828 "name": "BaseBdev3", 00:17:40.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.828 "is_configured": false, 00:17:40.828 "data_offset": 0, 00:17:40.828 "data_size": 0 00:17:40.828 }, 00:17:40.828 { 00:17:40.828 "name": "BaseBdev4", 00:17:40.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.828 "is_configured": false, 00:17:40.828 "data_offset": 0, 00:17:40.828 "data_size": 0 00:17:40.828 } 00:17:40.828 ] 00:17:40.828 }' 00:17:40.828 07:18:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:40.828 07:18:14 -- common/autotest_common.sh@10 -- # set +x 00:17:41.395 07:18:14 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:41.395 [2024-02-13 07:18:15.073452] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:41.395 [2024-02-13 07:18:15.073670] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:17:41.395 07:18:15 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:17:41.395 07:18:15 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:41.962 07:18:15 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:17:41.962 BaseBdev1 00:17:41.963 07:18:15 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:17:41.963 07:18:15 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:17:41.963 07:18:15 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:41.963 07:18:15 -- common/autotest_common.sh@887 -- # local i 00:17:41.963 07:18:15 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:41.963 07:18:15 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:41.963 07:18:15 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:42.221 07:18:15 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:42.479 [ 00:17:42.479 { 00:17:42.479 "name": "BaseBdev1", 00:17:42.479 "aliases": [ 00:17:42.479 "4fb6f9a3-2bc9-46f7-8bfa-f63c71c56d62" 00:17:42.479 ], 00:17:42.479 
"product_name": "Malloc disk", 00:17:42.479 "block_size": 512, 00:17:42.479 "num_blocks": 65536, 00:17:42.479 "uuid": "4fb6f9a3-2bc9-46f7-8bfa-f63c71c56d62", 00:17:42.479 "assigned_rate_limits": { 00:17:42.479 "rw_ios_per_sec": 0, 00:17:42.479 "rw_mbytes_per_sec": 0, 00:17:42.479 "r_mbytes_per_sec": 0, 00:17:42.479 "w_mbytes_per_sec": 0 00:17:42.479 }, 00:17:42.479 "claimed": false, 00:17:42.479 "zoned": false, 00:17:42.479 "supported_io_types": { 00:17:42.479 "read": true, 00:17:42.479 "write": true, 00:17:42.479 "unmap": true, 00:17:42.479 "write_zeroes": true, 00:17:42.479 "flush": true, 00:17:42.479 "reset": true, 00:17:42.479 "compare": false, 00:17:42.479 "compare_and_write": false, 00:17:42.479 "abort": true, 00:17:42.479 "nvme_admin": false, 00:17:42.479 "nvme_io": false 00:17:42.479 }, 00:17:42.479 "memory_domains": [ 00:17:42.479 { 00:17:42.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.479 "dma_device_type": 2 00:17:42.479 } 00:17:42.479 ], 00:17:42.480 "driver_specific": {} 00:17:42.480 } 00:17:42.480 ] 00:17:42.480 07:18:16 -- common/autotest_common.sh@893 -- # return 0 00:17:42.480 07:18:16 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:17:42.738 [2024-02-13 07:18:16.314670] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.738 [2024-02-13 07:18:16.316961] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:42.738 [2024-02-13 07:18:16.317192] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:42.738 [2024-02-13 07:18:16.317317] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:42.738 [2024-02-13 07:18:16.317450] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:42.738 [2024-02-13 07:18:16.317546] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:42.738 [2024-02-13 07:18:16.317600] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:42.738 07:18:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:17:42.738 07:18:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:42.738 07:18:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:42.738 07:18:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:42.738 07:18:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:42.738 07:18:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:42.738 07:18:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:42.739 07:18:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:42.739 07:18:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:42.739 07:18:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:42.739 07:18:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:42.739 07:18:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:42.739 07:18:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:42.739 07:18:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:42.997 07:18:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:42.997 "name": "Existed_Raid", 00:17:42.997 
"uuid": "a9805bdd-fe84-43ad-963b-a2e03bb40f08", 00:17:42.997 "strip_size_kb": 64, 00:17:42.997 "state": "configuring", 00:17:42.997 "raid_level": "raid0", 00:17:42.997 "superblock": true, 00:17:42.997 "num_base_bdevs": 4, 00:17:42.997 "num_base_bdevs_discovered": 1, 00:17:42.997 "num_base_bdevs_operational": 4, 00:17:42.997 "base_bdevs_list": [ 00:17:42.997 { 00:17:42.997 "name": "BaseBdev1", 00:17:42.997 "uuid": "4fb6f9a3-2bc9-46f7-8bfa-f63c71c56d62", 00:17:42.997 "is_configured": true, 00:17:42.997 "data_offset": 2048, 00:17:42.997 "data_size": 63488 00:17:42.997 }, 00:17:42.997 { 00:17:42.997 "name": "BaseBdev2", 00:17:42.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.997 "is_configured": false, 00:17:42.997 "data_offset": 0, 00:17:42.997 "data_size": 0 00:17:42.997 }, 00:17:42.997 { 00:17:42.997 "name": "BaseBdev3", 00:17:42.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.997 "is_configured": false, 00:17:42.997 "data_offset": 0, 00:17:42.997 "data_size": 0 00:17:42.997 }, 00:17:42.997 { 00:17:42.997 "name": "BaseBdev4", 00:17:42.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.997 "is_configured": false, 00:17:42.997 "data_offset": 0, 00:17:42.997 "data_size": 0 00:17:42.997 } 00:17:42.997 ] 00:17:42.997 }' 00:17:42.997 07:18:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:42.997 07:18:16 -- common/autotest_common.sh@10 -- # set +x 00:17:43.565 07:18:17 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:17:44.132 [2024-02-13 07:18:17.519599] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.132 BaseBdev2 00:17:44.132 07:18:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:17:44.132 07:18:17 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:17:44.132 07:18:17 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:44.132 07:18:17 -- common/autotest_common.sh@887 -- # local i 00:17:44.132 07:18:17 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:44.132 07:18:17 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:44.132 07:18:17 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:44.132 07:18:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:44.394 [ 00:17:44.394 { 00:17:44.394 "name": "BaseBdev2", 00:17:44.394 "aliases": [ 00:17:44.394 "56ae53f7-d6f0-4987-b094-36c49e61a425" 00:17:44.394 ], 00:17:44.394 "product_name": "Malloc disk", 00:17:44.394 "block_size": 512, 00:17:44.394 "num_blocks": 65536, 00:17:44.394 "uuid": "56ae53f7-d6f0-4987-b094-36c49e61a425", 00:17:44.394 "assigned_rate_limits": { 00:17:44.394 "rw_ios_per_sec": 0, 00:17:44.394 "rw_mbytes_per_sec": 0, 00:17:44.394 "r_mbytes_per_sec": 0, 00:17:44.394 "w_mbytes_per_sec": 0 00:17:44.394 }, 00:17:44.394 "claimed": true, 00:17:44.394 "claim_type": "exclusive_write", 00:17:44.394 "zoned": false, 00:17:44.394 "supported_io_types": { 00:17:44.394 "read": true, 00:17:44.394 "write": true, 00:17:44.394 "unmap": true, 00:17:44.394 "write_zeroes": true, 00:17:44.394 "flush": true, 00:17:44.394 "reset": true, 00:17:44.394 "compare": false, 00:17:44.394 "compare_and_write": false, 00:17:44.394 "abort": true, 00:17:44.394 "nvme_admin": false, 00:17:44.394 "nvme_io": false 00:17:44.394 }, 00:17:44.394 "memory_domains": [ 
00:17:44.394 { 00:17:44.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:44.394 "dma_device_type": 2 00:17:44.394 } 00:17:44.394 ], 00:17:44.394 "driver_specific": {} 00:17:44.394 } 00:17:44.394 ] 00:17:44.394 07:18:17 -- common/autotest_common.sh@893 -- # return 0 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.394 07:18:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.656 07:18:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:44.656 "name": "Existed_Raid", 00:17:44.656 "uuid": "a9805bdd-fe84-43ad-963b-a2e03bb40f08", 00:17:44.656 "strip_size_kb": 64, 00:17:44.656 "state": "configuring", 00:17:44.656 "raid_level": "raid0", 00:17:44.656 "superblock": true, 00:17:44.656 "num_base_bdevs": 4, 00:17:44.656 "num_base_bdevs_discovered": 2, 00:17:44.656 "num_base_bdevs_operational": 4, 00:17:44.656 "base_bdevs_list": [ 00:17:44.656 { 00:17:44.656 "name": "BaseBdev1", 00:17:44.656 "uuid": "4fb6f9a3-2bc9-46f7-8bfa-f63c71c56d62", 00:17:44.656 "is_configured": true, 00:17:44.656 "data_offset": 2048, 00:17:44.656 "data_size": 63488 00:17:44.656 }, 00:17:44.656 { 00:17:44.656 "name": "BaseBdev2", 00:17:44.656 "uuid": "56ae53f7-d6f0-4987-b094-36c49e61a425", 00:17:44.656 "is_configured": true, 00:17:44.656 "data_offset": 2048, 00:17:44.656 "data_size": 63488 00:17:44.656 }, 00:17:44.656 { 00:17:44.656 "name": "BaseBdev3", 00:17:44.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.656 "is_configured": false, 00:17:44.656 "data_offset": 0, 00:17:44.656 "data_size": 0 00:17:44.656 }, 00:17:44.656 { 00:17:44.656 "name": "BaseBdev4", 00:17:44.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.656 "is_configured": false, 00:17:44.656 "data_offset": 0, 00:17:44.656 "data_size": 0 00:17:44.656 } 00:17:44.656 ] 00:17:44.656 }' 00:17:44.656 07:18:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:44.656 07:18:18 -- common/autotest_common.sh@10 -- # set +x 00:17:45.224 07:18:18 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:17:45.483 [2024-02-13 07:18:19.023344] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:45.483 BaseBdev3 00:17:45.483 07:18:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:17:45.483 07:18:19 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:17:45.483 07:18:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:45.483 
07:18:19 -- common/autotest_common.sh@887 -- # local i 00:17:45.483 07:18:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:45.483 07:18:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:45.483 07:18:19 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:45.742 07:18:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:46.001 [ 00:17:46.001 { 00:17:46.001 "name": "BaseBdev3", 00:17:46.001 "aliases": [ 00:17:46.001 "09bb7c1a-fe15-4979-91ff-b91d04061aed" 00:17:46.001 ], 00:17:46.001 "product_name": "Malloc disk", 00:17:46.001 "block_size": 512, 00:17:46.001 "num_blocks": 65536, 00:17:46.001 "uuid": "09bb7c1a-fe15-4979-91ff-b91d04061aed", 00:17:46.001 "assigned_rate_limits": { 00:17:46.001 "rw_ios_per_sec": 0, 00:17:46.001 "rw_mbytes_per_sec": 0, 00:17:46.001 "r_mbytes_per_sec": 0, 00:17:46.001 "w_mbytes_per_sec": 0 00:17:46.001 }, 00:17:46.001 "claimed": true, 00:17:46.001 "claim_type": "exclusive_write", 00:17:46.001 "zoned": false, 00:17:46.001 "supported_io_types": { 00:17:46.001 "read": true, 00:17:46.001 "write": true, 00:17:46.001 "unmap": true, 00:17:46.001 "write_zeroes": true, 00:17:46.001 "flush": true, 00:17:46.001 "reset": true, 00:17:46.001 "compare": false, 00:17:46.001 "compare_and_write": false, 00:17:46.001 "abort": true, 00:17:46.001 "nvme_admin": false, 00:17:46.001 "nvme_io": false 00:17:46.001 }, 00:17:46.001 "memory_domains": [ 00:17:46.001 { 00:17:46.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.001 "dma_device_type": 2 00:17:46.001 } 00:17:46.001 ], 00:17:46.001 "driver_specific": {} 00:17:46.001 } 00:17:46.001 ] 00:17:46.001 07:18:19 -- common/autotest_common.sh@893 -- # return 0 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.001 07:18:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.260 07:18:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:46.260 "name": "Existed_Raid", 00:17:46.260 "uuid": "a9805bdd-fe84-43ad-963b-a2e03bb40f08", 00:17:46.260 "strip_size_kb": 64, 00:17:46.260 "state": "configuring", 00:17:46.260 "raid_level": "raid0", 00:17:46.260 "superblock": true, 00:17:46.260 "num_base_bdevs": 4, 00:17:46.260 "num_base_bdevs_discovered": 3, 00:17:46.260 "num_base_bdevs_operational": 4, 00:17:46.260 "base_bdevs_list": [ 00:17:46.260 { 00:17:46.260 "name": 
"BaseBdev1", 00:17:46.260 "uuid": "4fb6f9a3-2bc9-46f7-8bfa-f63c71c56d62", 00:17:46.260 "is_configured": true, 00:17:46.260 "data_offset": 2048, 00:17:46.260 "data_size": 63488 00:17:46.260 }, 00:17:46.260 { 00:17:46.260 "name": "BaseBdev2", 00:17:46.260 "uuid": "56ae53f7-d6f0-4987-b094-36c49e61a425", 00:17:46.260 "is_configured": true, 00:17:46.260 "data_offset": 2048, 00:17:46.260 "data_size": 63488 00:17:46.260 }, 00:17:46.260 { 00:17:46.260 "name": "BaseBdev3", 00:17:46.260 "uuid": "09bb7c1a-fe15-4979-91ff-b91d04061aed", 00:17:46.260 "is_configured": true, 00:17:46.260 "data_offset": 2048, 00:17:46.260 "data_size": 63488 00:17:46.260 }, 00:17:46.260 { 00:17:46.260 "name": "BaseBdev4", 00:17:46.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.260 "is_configured": false, 00:17:46.260 "data_offset": 0, 00:17:46.260 "data_size": 0 00:17:46.260 } 00:17:46.260 ] 00:17:46.260 }' 00:17:46.260 07:18:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:46.260 07:18:19 -- common/autotest_common.sh@10 -- # set +x 00:17:46.826 07:18:20 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:17:47.085 [2024-02-13 07:18:20.601295] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:47.086 [2024-02-13 07:18:20.601686] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:17:47.086 [2024-02-13 07:18:20.601803] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:47.086 [2024-02-13 07:18:20.601991] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:17:47.086 BaseBdev4 00:17:47.086 [2024-02-13 07:18:20.602551] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:17:47.086 [2024-02-13 07:18:20.602683] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:17:47.086 [2024-02-13 07:18:20.602946] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:47.086 07:18:20 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:17:47.086 07:18:20 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:17:47.086 07:18:20 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:17:47.086 07:18:20 -- common/autotest_common.sh@887 -- # local i 00:17:47.086 07:18:20 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:17:47.086 07:18:20 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:17:47.086 07:18:20 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:47.344 07:18:20 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:47.604 [ 00:17:47.604 { 00:17:47.604 "name": "BaseBdev4", 00:17:47.604 "aliases": [ 00:17:47.604 "b9f1a059-5100-4eef-b89b-7a9444967e33" 00:17:47.604 ], 00:17:47.604 "product_name": "Malloc disk", 00:17:47.604 "block_size": 512, 00:17:47.604 "num_blocks": 65536, 00:17:47.604 "uuid": "b9f1a059-5100-4eef-b89b-7a9444967e33", 00:17:47.604 "assigned_rate_limits": { 00:17:47.604 "rw_ios_per_sec": 0, 00:17:47.604 "rw_mbytes_per_sec": 0, 00:17:47.604 "r_mbytes_per_sec": 0, 00:17:47.604 "w_mbytes_per_sec": 0 00:17:47.604 }, 00:17:47.604 "claimed": true, 00:17:47.604 "claim_type": "exclusive_write", 00:17:47.604 "zoned": false, 00:17:47.604 
"supported_io_types": { 00:17:47.604 "read": true, 00:17:47.604 "write": true, 00:17:47.604 "unmap": true, 00:17:47.604 "write_zeroes": true, 00:17:47.604 "flush": true, 00:17:47.604 "reset": true, 00:17:47.604 "compare": false, 00:17:47.604 "compare_and_write": false, 00:17:47.604 "abort": true, 00:17:47.604 "nvme_admin": false, 00:17:47.604 "nvme_io": false 00:17:47.604 }, 00:17:47.604 "memory_domains": [ 00:17:47.604 { 00:17:47.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.604 "dma_device_type": 2 00:17:47.604 } 00:17:47.604 ], 00:17:47.604 "driver_specific": {} 00:17:47.604 } 00:17:47.604 ] 00:17:47.604 07:18:21 -- common/autotest_common.sh@893 -- # return 0 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:47.604 "name": "Existed_Raid", 00:17:47.604 "uuid": "a9805bdd-fe84-43ad-963b-a2e03bb40f08", 00:17:47.604 "strip_size_kb": 64, 00:17:47.604 "state": "online", 00:17:47.604 "raid_level": "raid0", 00:17:47.604 "superblock": true, 00:17:47.604 "num_base_bdevs": 4, 00:17:47.604 "num_base_bdevs_discovered": 4, 00:17:47.604 "num_base_bdevs_operational": 4, 00:17:47.604 "base_bdevs_list": [ 00:17:47.604 { 00:17:47.604 "name": "BaseBdev1", 00:17:47.604 "uuid": "4fb6f9a3-2bc9-46f7-8bfa-f63c71c56d62", 00:17:47.604 "is_configured": true, 00:17:47.604 "data_offset": 2048, 00:17:47.604 "data_size": 63488 00:17:47.604 }, 00:17:47.604 { 00:17:47.604 "name": "BaseBdev2", 00:17:47.604 "uuid": "56ae53f7-d6f0-4987-b094-36c49e61a425", 00:17:47.604 "is_configured": true, 00:17:47.604 "data_offset": 2048, 00:17:47.604 "data_size": 63488 00:17:47.604 }, 00:17:47.604 { 00:17:47.604 "name": "BaseBdev3", 00:17:47.604 "uuid": "09bb7c1a-fe15-4979-91ff-b91d04061aed", 00:17:47.604 "is_configured": true, 00:17:47.604 "data_offset": 2048, 00:17:47.604 "data_size": 63488 00:17:47.604 }, 00:17:47.604 { 00:17:47.604 "name": "BaseBdev4", 00:17:47.604 "uuid": "b9f1a059-5100-4eef-b89b-7a9444967e33", 00:17:47.604 "is_configured": true, 00:17:47.604 "data_offset": 2048, 00:17:47.604 "data_size": 63488 00:17:47.604 } 00:17:47.604 ] 00:17:47.604 }' 00:17:47.604 07:18:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:47.604 07:18:21 -- common/autotest_common.sh@10 -- # set +x 00:17:48.555 07:18:21 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:17:48.555 [2024-02-13 07:18:22.146149] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:48.555 [2024-02-13 07:18:22.146419] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.555 [2024-02-13 07:18:22.146634] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@197 -- # return 1 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.555 07:18:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:48.814 07:18:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:48.814 "name": "Existed_Raid", 00:17:48.814 "uuid": "a9805bdd-fe84-43ad-963b-a2e03bb40f08", 00:17:48.814 "strip_size_kb": 64, 00:17:48.814 "state": "offline", 00:17:48.814 "raid_level": "raid0", 00:17:48.814 "superblock": true, 00:17:48.814 "num_base_bdevs": 4, 00:17:48.814 "num_base_bdevs_discovered": 3, 00:17:48.814 "num_base_bdevs_operational": 3, 00:17:48.814 "base_bdevs_list": [ 00:17:48.814 { 00:17:48.814 "name": null, 00:17:48.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.814 "is_configured": false, 00:17:48.814 "data_offset": 2048, 00:17:48.814 "data_size": 63488 00:17:48.814 }, 00:17:48.814 { 00:17:48.814 "name": "BaseBdev2", 00:17:48.814 "uuid": "56ae53f7-d6f0-4987-b094-36c49e61a425", 00:17:48.814 "is_configured": true, 00:17:48.814 "data_offset": 2048, 00:17:48.814 "data_size": 63488 00:17:48.814 }, 00:17:48.814 { 00:17:48.814 "name": "BaseBdev3", 00:17:48.814 "uuid": "09bb7c1a-fe15-4979-91ff-b91d04061aed", 00:17:48.814 "is_configured": true, 00:17:48.814 "data_offset": 2048, 00:17:48.814 "data_size": 63488 00:17:48.814 }, 00:17:48.814 { 00:17:48.814 "name": "BaseBdev4", 00:17:48.814 "uuid": "b9f1a059-5100-4eef-b89b-7a9444967e33", 00:17:48.814 "is_configured": true, 00:17:48.814 "data_offset": 2048, 00:17:48.814 "data_size": 63488 00:17:48.814 } 00:17:48.814 ] 00:17:48.814 }' 00:17:48.814 07:18:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:48.814 07:18:22 -- common/autotest_common.sh@10 -- # set +x 00:17:49.750 07:18:23 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:17:49.750 07:18:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:49.750 07:18:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:49.750 07:18:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:49.750 07:18:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:49.750 07:18:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:49.750 07:18:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:50.009 [2024-02-13 07:18:23.530048] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:50.009 07:18:23 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.009 07:18:23 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.009 07:18:23 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.009 07:18:23 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.268 07:18:23 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.268 07:18:23 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.268 07:18:23 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:17:50.527 [2024-02-13 07:18:24.041636] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:50.527 07:18:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:50.527 07:18:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:50.527 07:18:24 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.527 07:18:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:17:50.785 07:18:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:17:50.785 07:18:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:50.785 07:18:24 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:17:51.043 [2024-02-13 07:18:24.566039] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:51.043 [2024-02-13 07:18:24.566335] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:17:51.043 07:18:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:17:51.043 07:18:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:17:51.043 07:18:24 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:51.043 07:18:24 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:17:51.301 07:18:24 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:17:51.301 07:18:24 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:17:51.301 07:18:24 -- bdev/bdev_raid.sh@287 -- # killprocess 123444 00:17:51.301 07:18:24 -- common/autotest_common.sh@924 -- # '[' -z 123444 ']' 00:17:51.301 07:18:24 -- common/autotest_common.sh@928 -- # kill -0 123444 00:17:51.301 07:18:24 -- common/autotest_common.sh@929 -- # uname 00:17:51.301 07:18:24 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:17:51.301 07:18:24 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 123444 00:17:51.301 killing process with pid 123444 00:17:51.301 07:18:24 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:17:51.301 07:18:24 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:17:51.301 07:18:24 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 123444' 00:17:51.301 07:18:24 -- 
common/autotest_common.sh@943 -- # kill 123444 00:17:51.301 07:18:24 -- common/autotest_common.sh@948 -- # wait 123444 00:17:51.301 [2024-02-13 07:18:24.891578] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.301 [2024-02-13 07:18:24.891698] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:52.676 ************************************ 00:17:52.676 END TEST raid_state_function_test_sb 00:17:52.676 ************************************ 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@289 -- # return 0 00:17:52.676 00:17:52.676 real 0m15.543s 00:17:52.676 user 0m27.813s 00:17:52.676 sys 0m1.726s 00:17:52.676 07:18:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:52.676 07:18:26 -- common/autotest_common.sh@10 -- # set +x 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:17:52.676 07:18:26 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:17:52.676 07:18:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:17:52.676 07:18:26 -- common/autotest_common.sh@10 -- # set +x 00:17:52.676 ************************************ 00:17:52.676 START TEST raid_superblock_test 00:17:52.676 ************************************ 00:17:52.676 07:18:26 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid0 4 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@357 -- # raid_pid=123942 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:52.676 07:18:26 -- bdev/bdev_raid.sh@358 -- # waitforlisten 123942 /var/tmp/spdk-raid.sock 00:17:52.676 07:18:26 -- common/autotest_common.sh@817 -- # '[' -z 123942 ']' 00:17:52.676 07:18:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:52.676 07:18:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:52.676 07:18:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:52.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
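The startup pattern traced above — launch bdev_svc on a private RPC socket, then block until it listens — reduces to roughly the following bash. This is a minimal sketch: the polling loop stands in for what the suite's waitforlisten helper does rather than reproducing its exact code, and rpc_get_methods is used here only as a cheap liveness probe.

rootdir=/home/vagrant/spdk_repo/spdk
rpc_sock=/var/tmp/spdk-raid.sock

# Start the bare bdev service with raid debug logging on a private socket.
"$rootdir/test/app/bdev_svc/bdev_svc" -r "$rpc_sock" -L bdev_raid &
raid_pid=$!

# Poll until the UNIX-domain socket answers an RPC (or the daemon dies).
until "$rootdir/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$raid_pid" 2>/dev/null || { echo "bdev_svc exited early" >&2; exit 1; }
    sleep 0.1
done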
00:17:52.676 07:18:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:52.676 07:18:26 -- common/autotest_common.sh@10 -- # set +x 00:17:52.676 [2024-02-13 07:18:26.198078] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:17:52.676 [2024-02-13 07:18:26.198695] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123942 ] 00:17:52.934 [2024-02-13 07:18:26.383471] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.192 [2024-02-13 07:18:26.631707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.192 [2024-02-13 07:18:26.817675] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:53.450 07:18:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:53.450 07:18:27 -- common/autotest_common.sh@850 -- # return 0 00:17:53.450 07:18:27 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:17:53.450 07:18:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:53.450 07:18:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:17:53.450 07:18:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:17:53.450 07:18:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:53.450 07:18:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.450 07:18:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.450 07:18:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.450 07:18:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:17:53.709 malloc1 00:17:53.709 07:18:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:53.972 [2024-02-13 07:18:27.567091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.972 [2024-02-13 07:18:27.567379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.972 [2024-02-13 07:18:27.567521] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:17:53.972 [2024-02-13 07:18:27.567685] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.972 [2024-02-13 07:18:27.569889] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.972 [2024-02-13 07:18:27.570051] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.972 pt1 00:17:53.972 07:18:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:53.972 07:18:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:53.972 07:18:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:17:53.972 07:18:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:17:53.972 07:18:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:53.972 07:18:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:53.972 07:18:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:53.972 07:18:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:53.972 07:18:27 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:17:54.240 malloc2 00:17:54.240 07:18:27 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.498 [2024-02-13 07:18:28.053003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.498 [2024-02-13 07:18:28.053344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.498 [2024-02-13 07:18:28.053504] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:17:54.498 [2024-02-13 07:18:28.053684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.498 [2024-02-13 07:18:28.055953] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.498 [2024-02-13 07:18:28.056117] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.498 pt2 00:17:54.498 07:18:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:54.498 07:18:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:54.498 07:18:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:17:54.498 07:18:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:17:54.498 07:18:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:54.498 07:18:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:54.498 07:18:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:54.498 07:18:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:54.498 07:18:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:17:54.757 malloc3 00:17:54.757 07:18:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:55.016 [2024-02-13 07:18:28.499299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:55.016 [2024-02-13 07:18:28.499543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.016 [2024-02-13 07:18:28.499618] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:17:55.016 [2024-02-13 07:18:28.499939] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.016 [2024-02-13 07:18:28.502140] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.016 [2024-02-13 07:18:28.502306] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:55.016 pt3 00:17:55.016 07:18:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:55.016 07:18:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:55.017 07:18:28 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:17:55.017 07:18:28 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:17:55.017 07:18:28 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:17:55.017 07:18:28 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:55.017 07:18:28 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:17:55.017 07:18:28 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:55.017 07:18:28 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:17:55.276 malloc4 00:17:55.276 07:18:28 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:55.276 [2024-02-13 07:18:28.961563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:55.276 [2024-02-13 07:18:28.961895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.276 [2024-02-13 07:18:28.962098] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:55.276 [2024-02-13 07:18:28.962238] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.276 [2024-02-13 07:18:28.964877] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.276 [2024-02-13 07:18:28.965057] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:55.276 pt4 00:17:55.535 07:18:28 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:17:55.535 07:18:28 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:17:55.535 07:18:28 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:17:55.535 [2024-02-13 07:18:29.165817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:55.535 [2024-02-13 07:18:29.168350] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:55.535 [2024-02-13 07:18:29.168586] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:55.535 [2024-02-13 07:18:29.168734] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:55.535 [2024-02-13 07:18:29.169090] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:17:55.535 [2024-02-13 07:18:29.169211] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:17:55.535 [2024-02-13 07:18:29.169436] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:17:55.535 [2024-02-13 07:18:29.169981] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:17:55.535 [2024-02-13 07:18:29.170099] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:17:55.535 [2024-02-13 07:18:29.170433] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
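Unrolled, the setup recorded above is a four-iteration loop plus one assembly call. A condensed sketch using the same RPC arguments the trace shows (32 MB malloc bdevs with 512-byte blocks, fixed per-index passthru UUIDs, a 64 KB strip, and -s to persist a superblock):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
num_base_bdevs=4

# Build the pt1..pt4 stack: each malloc bdev is wrapped by a passthru bdev.
for ((i = 1; i <= num_base_bdevs; i++)); do
    $rpc bdev_malloc_create 32 512 -b "malloc$i"
    $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# Assemble raid0 over the passthru bdevs; -s writes the on-disk superblock.
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s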
00:17:55.535 07:18:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.794 07:18:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:55.794 "name": "raid_bdev1", 00:17:55.794 "uuid": "31c0c881-eb54-48b1-85d7-686a15d10cec", 00:17:55.794 "strip_size_kb": 64, 00:17:55.794 "state": "online", 00:17:55.794 "raid_level": "raid0", 00:17:55.794 "superblock": true, 00:17:55.794 "num_base_bdevs": 4, 00:17:55.794 "num_base_bdevs_discovered": 4, 00:17:55.794 "num_base_bdevs_operational": 4, 00:17:55.794 "base_bdevs_list": [ 00:17:55.794 { 00:17:55.794 "name": "pt1", 00:17:55.794 "uuid": "0e10d21c-d3bd-53d3-8889-cd434b35bbae", 00:17:55.794 "is_configured": true, 00:17:55.794 "data_offset": 2048, 00:17:55.794 "data_size": 63488 00:17:55.794 }, 00:17:55.794 { 00:17:55.794 "name": "pt2", 00:17:55.794 "uuid": "6c57b6c3-c5aa-5b00-8e9d-bfd660097702", 00:17:55.794 "is_configured": true, 00:17:55.794 "data_offset": 2048, 00:17:55.794 "data_size": 63488 00:17:55.794 }, 00:17:55.794 { 00:17:55.794 "name": "pt3", 00:17:55.794 "uuid": "b0a293ad-f9eb-5112-8db9-9dc99a0947b5", 00:17:55.794 "is_configured": true, 00:17:55.794 "data_offset": 2048, 00:17:55.794 "data_size": 63488 00:17:55.794 }, 00:17:55.794 { 00:17:55.794 "name": "pt4", 00:17:55.794 "uuid": "8fb3f9c3-fa83-581a-9966-8f7b4466367a", 00:17:55.794 "is_configured": true, 00:17:55.794 "data_offset": 2048, 00:17:55.794 "data_size": 63488 00:17:55.794 } 00:17:55.794 ] 00:17:55.794 }' 00:17:55.794 07:18:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:55.794 07:18:29 -- common/autotest_common.sh@10 -- # set +x 00:17:56.362 07:18:30 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:56.362 07:18:30 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:17:56.621 [2024-02-13 07:18:30.291005] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.621 07:18:30 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=31c0c881-eb54-48b1-85d7-686a15d10cec 00:17:56.621 07:18:30 -- bdev/bdev_raid.sh@380 -- # '[' -z 31c0c881-eb54-48b1-85d7-686a15d10cec ']' 00:17:56.621 07:18:30 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:56.882 [2024-02-13 07:18:30.542774] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.882 [2024-02-13 07:18:30.543029] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.882 [2024-02-13 07:18:30.543232] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.882 [2024-02-13 07:18:30.543435] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.882 [2024-02-13 07:18:30.543534] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:17:56.882 07:18:30 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.882 07:18:30 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:17:57.141 07:18:30 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:17:57.141 07:18:30 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:17:57.141 07:18:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.141 07:18:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
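The verify-and-teardown steps around this point are easiest to read as one unit. A rough bash equivalent, with the expected values taken from the JSON the trace prints (the rpc shorthand is redefined so the sketch stands alone):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Confirm the array came up clean before tearing it down.
state=$($rpc bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | .state')
[[ $state == online ]] || { echo "raid_bdev1 not online: $state" >&2; exit 1; }

raid_bdev_uuid=$($rpc bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')

# Teardown mirrors setup in reverse: the raid bdev first, then each passthru.
$rpc bdev_raid_delete raid_bdev1
for pt in pt1 pt2 pt3 pt4; do
    $rpc bdev_passthru_delete "$pt"
done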
00:17:57.400 07:18:30 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.400 07:18:30 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:57.658 07:18:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.658 07:18:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:17:57.918 07:18:31 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:17:57.918 07:18:31 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:17:58.177 07:18:31 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:58.177 07:18:31 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:58.177 07:18:31 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:17:58.177 07:18:31 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:58.177 07:18:31 -- common/autotest_common.sh@638 -- # local es=0 00:17:58.177 07:18:31 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:58.177 07:18:31 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.177 07:18:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:58.177 07:18:31 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.177 07:18:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:58.177 07:18:31 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.177 07:18:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:58.177 07:18:31 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.177 07:18:31 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:58.177 07:18:31 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:17:58.436 [2024-02-13 07:18:32.062993] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:58.436 [2024-02-13 07:18:32.064953] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:58.436 [2024-02-13 07:18:32.065174] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:58.436 [2024-02-13 07:18:32.065278] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:17:58.436 [2024-02-13 07:18:32.065431] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:17:58.436 [2024-02-13 07:18:32.066231] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:17:58.436 [2024-02-13 07:18:32.066539] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:17:58.436 [2024-02-13 
07:18:32.066885] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:17:58.436 [2024-02-13 07:18:32.067181] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.436 [2024-02-13 07:18:32.067314] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:17:58.436 request: 00:17:58.436 { 00:17:58.436 "name": "raid_bdev1", 00:17:58.436 "raid_level": "raid0", 00:17:58.436 "base_bdevs": [ 00:17:58.436 "malloc1", 00:17:58.436 "malloc2", 00:17:58.436 "malloc3", 00:17:58.436 "malloc4" 00:17:58.436 ], 00:17:58.436 "superblock": false, 00:17:58.436 "strip_size_kb": 64, 00:17:58.436 "method": "bdev_raid_create", 00:17:58.437 "req_id": 1 00:17:58.437 } 00:17:58.437 Got JSON-RPC error response 00:17:58.437 response: 00:17:58.437 { 00:17:58.437 "code": -17, 00:17:58.437 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:58.437 } 00:17:58.437 07:18:32 -- common/autotest_common.sh@641 -- # es=1 00:17:58.437 07:18:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:58.437 07:18:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:58.437 07:18:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:58.437 07:18:32 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.437 07:18:32 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:17:58.695 07:18:32 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:17:58.695 07:18:32 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:17:58.695 07:18:32 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.954 [2024-02-13 07:18:32.523653] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.954 [2024-02-13 07:18:32.524016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.954 [2024-02-13 07:18:32.524259] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:58.954 [2024-02-13 07:18:32.524521] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.954 [2024-02-13 07:18:32.527266] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.954 [2024-02-13 07:18:32.527577] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.954 [2024-02-13 07:18:32.527924] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:17:58.954 [2024-02-13 07:18:32.528134] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.954 pt1 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.954 07:18:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.213 07:18:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:17:59.213 "name": "raid_bdev1", 00:17:59.213 "uuid": "31c0c881-eb54-48b1-85d7-686a15d10cec", 00:17:59.213 "strip_size_kb": 64, 00:17:59.213 "state": "configuring", 00:17:59.213 "raid_level": "raid0", 00:17:59.213 "superblock": true, 00:17:59.213 "num_base_bdevs": 4, 00:17:59.213 "num_base_bdevs_discovered": 1, 00:17:59.213 "num_base_bdevs_operational": 4, 00:17:59.213 "base_bdevs_list": [ 00:17:59.213 { 00:17:59.213 "name": "pt1", 00:17:59.213 "uuid": "0e10d21c-d3bd-53d3-8889-cd434b35bbae", 00:17:59.213 "is_configured": true, 00:17:59.213 "data_offset": 2048, 00:17:59.213 "data_size": 63488 00:17:59.213 }, 00:17:59.213 { 00:17:59.213 "name": null, 00:17:59.213 "uuid": "6c57b6c3-c5aa-5b00-8e9d-bfd660097702", 00:17:59.213 "is_configured": false, 00:17:59.213 "data_offset": 2048, 00:17:59.213 "data_size": 63488 00:17:59.213 }, 00:17:59.213 { 00:17:59.213 "name": null, 00:17:59.213 "uuid": "b0a293ad-f9eb-5112-8db9-9dc99a0947b5", 00:17:59.213 "is_configured": false, 00:17:59.213 "data_offset": 2048, 00:17:59.213 "data_size": 63488 00:17:59.213 }, 00:17:59.213 { 00:17:59.213 "name": null, 00:17:59.213 "uuid": "8fb3f9c3-fa83-581a-9966-8f7b4466367a", 00:17:59.213 "is_configured": false, 00:17:59.213 "data_offset": 2048, 00:17:59.213 "data_size": 63488 00:17:59.213 } 00:17:59.213 ] 00:17:59.213 }' 00:17:59.213 07:18:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:17:59.213 07:18:32 -- common/autotest_common.sh@10 -- # set +x 00:17:59.782 07:18:33 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:17:59.782 07:18:33 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:00.040 [2024-02-13 07:18:33.644297] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:00.040 [2024-02-13 07:18:33.644602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.040 [2024-02-13 07:18:33.644690] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:00.040 [2024-02-13 07:18:33.644957] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.040 [2024-02-13 07:18:33.645588] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.040 [2024-02-13 07:18:33.645789] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:00.040 [2024-02-13 07:18:33.646004] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:00.040 [2024-02-13 07:18:33.646131] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:00.040 pt2 00:18:00.040 07:18:33 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:00.299 [2024-02-13 07:18:33.892280] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:00.299 07:18:33 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:18:00.299 07:18:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
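What the pt2 round-trip above exercises: a base bdev that was claimed through its on-disk superblock during examine can be removed again, and the half-assembled array must then report configuring rather than online. A compact sketch of that assertion, with the expected counts taken from the JSON dumps in this trace:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Re-add pt2 (the examine path claims it via the superblock), then remove
# it again and check that the array fell back to the configuring state.
$rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$rpc bdev_passthru_delete pt2

$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")
    | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
# prints: configuring 1/4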
00:18:00.299 07:18:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:00.299 07:18:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:00.299 07:18:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:00.299 07:18:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:00.299 07:18:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:00.300 07:18:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:00.300 07:18:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:00.300 07:18:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:00.300 07:18:33 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.300 07:18:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.559 07:18:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:00.559 "name": "raid_bdev1", 00:18:00.559 "uuid": "31c0c881-eb54-48b1-85d7-686a15d10cec", 00:18:00.559 "strip_size_kb": 64, 00:18:00.559 "state": "configuring", 00:18:00.559 "raid_level": "raid0", 00:18:00.559 "superblock": true, 00:18:00.559 "num_base_bdevs": 4, 00:18:00.559 "num_base_bdevs_discovered": 1, 00:18:00.559 "num_base_bdevs_operational": 4, 00:18:00.559 "base_bdevs_list": [ 00:18:00.559 { 00:18:00.559 "name": "pt1", 00:18:00.559 "uuid": "0e10d21c-d3bd-53d3-8889-cd434b35bbae", 00:18:00.559 "is_configured": true, 00:18:00.559 "data_offset": 2048, 00:18:00.559 "data_size": 63488 00:18:00.559 }, 00:18:00.559 { 00:18:00.559 "name": null, 00:18:00.559 "uuid": "6c57b6c3-c5aa-5b00-8e9d-bfd660097702", 00:18:00.559 "is_configured": false, 00:18:00.559 "data_offset": 2048, 00:18:00.559 "data_size": 63488 00:18:00.559 }, 00:18:00.559 { 00:18:00.559 "name": null, 00:18:00.559 "uuid": "b0a293ad-f9eb-5112-8db9-9dc99a0947b5", 00:18:00.559 "is_configured": false, 00:18:00.559 "data_offset": 2048, 00:18:00.559 "data_size": 63488 00:18:00.559 }, 00:18:00.559 { 00:18:00.559 "name": null, 00:18:00.559 "uuid": "8fb3f9c3-fa83-581a-9966-8f7b4466367a", 00:18:00.559 "is_configured": false, 00:18:00.559 "data_offset": 2048, 00:18:00.559 "data_size": 63488 00:18:00.559 } 00:18:00.559 ] 00:18:00.559 }' 00:18:00.559 07:18:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:00.559 07:18:34 -- common/autotest_common.sh@10 -- # set +x 00:18:01.496 07:18:34 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:01.496 07:18:34 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:01.496 07:18:34 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:01.496 [2024-02-13 07:18:35.008573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:01.496 [2024-02-13 07:18:35.008895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.496 [2024-02-13 07:18:35.008978] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:01.496 [2024-02-13 07:18:35.009222] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.496 [2024-02-13 07:18:35.009821] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.496 [2024-02-13 07:18:35.009994] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:01.496 [2024-02-13 07:18:35.010203] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:01.496 [2024-02-13 07:18:35.010325] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:01.496 pt2 00:18:01.496 07:18:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:01.496 07:18:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:01.496 07:18:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:01.755 [2024-02-13 07:18:35.208554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:01.755 [2024-02-13 07:18:35.208759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.755 [2024-02-13 07:18:35.208817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:01.755 [2024-02-13 07:18:35.208929] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.755 [2024-02-13 07:18:35.209382] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.755 [2024-02-13 07:18:35.209573] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:01.755 [2024-02-13 07:18:35.209753] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:01.755 [2024-02-13 07:18:35.209857] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:01.755 pt3 00:18:01.755 07:18:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:01.755 07:18:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:01.755 07:18:35 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:02.014 [2024-02-13 07:18:35.472609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:02.014 [2024-02-13 07:18:35.472827] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.014 [2024-02-13 07:18:35.472892] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:02.014 [2024-02-13 07:18:35.473004] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.014 [2024-02-13 07:18:35.473494] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.014 [2024-02-13 07:18:35.473680] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:02.014 [2024-02-13 07:18:35.473867] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:02.015 [2024-02-13 07:18:35.473982] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:02.015 [2024-02-13 07:18:35.474162] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:02.015 [2024-02-13 07:18:35.474280] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:02.015 [2024-02-13 07:18:35.474427] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:02.015 [2024-02-13 07:18:35.475007] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:02.015 [2024-02-13 07:18:35.475134] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:18:02.015 [2024-02-13 07:18:35.475355] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:02.015 pt4 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:02.015 "name": "raid_bdev1", 00:18:02.015 "uuid": "31c0c881-eb54-48b1-85d7-686a15d10cec", 00:18:02.015 "strip_size_kb": 64, 00:18:02.015 "state": "online", 00:18:02.015 "raid_level": "raid0", 00:18:02.015 "superblock": true, 00:18:02.015 "num_base_bdevs": 4, 00:18:02.015 "num_base_bdevs_discovered": 4, 00:18:02.015 "num_base_bdevs_operational": 4, 00:18:02.015 "base_bdevs_list": [ 00:18:02.015 { 00:18:02.015 "name": "pt1", 00:18:02.015 "uuid": "0e10d21c-d3bd-53d3-8889-cd434b35bbae", 00:18:02.015 "is_configured": true, 00:18:02.015 "data_offset": 2048, 00:18:02.015 "data_size": 63488 00:18:02.015 }, 00:18:02.015 { 00:18:02.015 "name": "pt2", 00:18:02.015 "uuid": "6c57b6c3-c5aa-5b00-8e9d-bfd660097702", 00:18:02.015 "is_configured": true, 00:18:02.015 "data_offset": 2048, 00:18:02.015 "data_size": 63488 00:18:02.015 }, 00:18:02.015 { 00:18:02.015 "name": "pt3", 00:18:02.015 "uuid": "b0a293ad-f9eb-5112-8db9-9dc99a0947b5", 00:18:02.015 "is_configured": true, 00:18:02.015 "data_offset": 2048, 00:18:02.015 "data_size": 63488 00:18:02.015 }, 00:18:02.015 { 00:18:02.015 "name": "pt4", 00:18:02.015 "uuid": "8fb3f9c3-fa83-581a-9966-8f7b4466367a", 00:18:02.015 "is_configured": true, 00:18:02.015 "data_offset": 2048, 00:18:02.015 "data_size": 63488 00:18:02.015 } 00:18:02.015 ] 00:18:02.015 }' 00:18:02.015 07:18:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:02.015 07:18:35 -- common/autotest_common.sh@10 -- # set +x 00:18:02.954 07:18:36 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:02.954 07:18:36 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:02.954 [2024-02-13 07:18:36.525794] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:02.954 07:18:36 -- bdev/bdev_raid.sh@430 -- # '[' 31c0c881-eb54-48b1-85d7-686a15d10cec '!=' 31c0c881-eb54-48b1-85d7-686a15d10cec ']' 00:18:02.954 07:18:36 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:18:02.954 07:18:36 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:02.954 07:18:36 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:02.954 07:18:36 -- bdev/bdev_raid.sh@511 -- # killprocess 123942 00:18:02.954 07:18:36 -- common/autotest_common.sh@924 -- # '[' -z 
123942 ']' 00:18:02.954 07:18:36 -- common/autotest_common.sh@928 -- # kill -0 123942 00:18:02.954 07:18:36 -- common/autotest_common.sh@929 -- # uname 00:18:02.954 07:18:36 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:02.954 07:18:36 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 123942 00:18:02.954 killing process with pid 123942 00:18:02.954 07:18:36 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:02.954 07:18:36 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:02.954 07:18:36 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 123942' 00:18:02.954 07:18:36 -- common/autotest_common.sh@943 -- # kill 123942 00:18:02.954 07:18:36 -- common/autotest_common.sh@948 -- # wait 123942 00:18:02.954 [2024-02-13 07:18:36.560067] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:02.954 [2024-02-13 07:18:36.560154] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:02.954 [2024-02-13 07:18:36.560263] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:02.954 [2024-02-13 07:18:36.560274] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:18:03.213 [2024-02-13 07:18:36.862569] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:04.591 ************************************ 00:18:04.591 END TEST raid_superblock_test 00:18:04.591 ************************************ 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:04.591 00:18:04.591 real 0m11.796s 00:18:04.591 user 0m20.455s 00:18:04.591 sys 0m1.546s 00:18:04.591 07:18:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:04.591 07:18:37 -- common/autotest_common.sh@10 -- # set +x 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:18:04.591 07:18:37 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:18:04.591 07:18:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:04.591 07:18:37 -- common/autotest_common.sh@10 -- # set +x 00:18:04.591 ************************************ 00:18:04.591 START TEST raid_state_function_test 00:18:04.591 ************************************ 00:18:04.591 07:18:37 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 4 false 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.591 07:18:37 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@226 -- # raid_pid=124285 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124285' 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:04.591 Process raid pid: 124285 00:18:04.591 07:18:37 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124285 /var/tmp/spdk-raid.sock 00:18:04.591 07:18:37 -- common/autotest_common.sh@817 -- # '[' -z 124285 ']' 00:18:04.592 07:18:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:04.592 07:18:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:04.592 07:18:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:04.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:04.592 07:18:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:04.592 07:18:37 -- common/autotest_common.sh@10 -- # set +x 00:18:04.592 [2024-02-13 07:18:38.050062] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
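The preamble traced above expands a handful of test parameters before the daemon even starts. In plain bash it amounts to something like the following sketch; the variable names and values are taken directly from the xtrace lines, while the surrounding structure is reconstructed:

raid_level=concat
num_base_bdevs=4
superblock=false

# Generate BaseBdev1..BaseBdev4, exactly as the traced subshell loop does.
base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo "BaseBdev$i"; done))

# Striping levels get a strip size; raid1 would not.
strip_size_create_arg=
if [[ $raid_level != raid1 ]]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi

# No superblock in this test variant, so the flag stays empty.
superblock_create_arg=
if [[ $superblock = true ]]; then
    superblock_create_arg=-s
fi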
00:18:04.592 [2024-02-13 07:18:38.050440] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.592 [2024-02-13 07:18:38.222599] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.851 [2024-02-13 07:18:38.414561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.109 [2024-02-13 07:18:38.594558] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:05.368 07:18:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:05.368 07:18:38 -- common/autotest_common.sh@850 -- # return 0 00:18:05.368 07:18:38 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:05.628 [2024-02-13 07:18:39.200512] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:05.628 [2024-02-13 07:18:39.200770] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:05.628 [2024-02-13 07:18:39.200899] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:05.628 [2024-02-13 07:18:39.200959] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:05.628 [2024-02-13 07:18:39.201047] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:05.628 [2024-02-13 07:18:39.201179] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:05.628 [2024-02-13 07:18:39.201445] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:05.628 [2024-02-13 07:18:39.201520] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.628 07:18:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:05.887 07:18:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:05.887 "name": "Existed_Raid", 00:18:05.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.887 "strip_size_kb": 64, 00:18:05.887 "state": "configuring", 00:18:05.887 "raid_level": "concat", 00:18:05.887 "superblock": false, 00:18:05.887 "num_base_bdevs": 4, 00:18:05.887 "num_base_bdevs_discovered": 0, 00:18:05.887 "num_base_bdevs_operational": 4, 00:18:05.887 "base_bdevs_list": [ 00:18:05.887 { 00:18:05.887 
"name": "BaseBdev1", 00:18:05.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.887 "is_configured": false, 00:18:05.887 "data_offset": 0, 00:18:05.887 "data_size": 0 00:18:05.887 }, 00:18:05.887 { 00:18:05.887 "name": "BaseBdev2", 00:18:05.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.887 "is_configured": false, 00:18:05.887 "data_offset": 0, 00:18:05.887 "data_size": 0 00:18:05.887 }, 00:18:05.887 { 00:18:05.887 "name": "BaseBdev3", 00:18:05.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.887 "is_configured": false, 00:18:05.887 "data_offset": 0, 00:18:05.887 "data_size": 0 00:18:05.887 }, 00:18:05.887 { 00:18:05.887 "name": "BaseBdev4", 00:18:05.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.887 "is_configured": false, 00:18:05.887 "data_offset": 0, 00:18:05.887 "data_size": 0 00:18:05.887 } 00:18:05.887 ] 00:18:05.887 }' 00:18:05.887 07:18:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:05.887 07:18:39 -- common/autotest_common.sh@10 -- # set +x 00:18:06.825 07:18:40 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:06.825 [2024-02-13 07:18:40.424653] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:06.825 [2024-02-13 07:18:40.424869] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:06.825 07:18:40 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:07.084 [2024-02-13 07:18:40.676685] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:07.084 [2024-02-13 07:18:40.676896] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:07.084 [2024-02-13 07:18:40.677046] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:07.084 [2024-02-13 07:18:40.677205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:07.084 [2024-02-13 07:18:40.677304] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:07.084 [2024-02-13 07:18:40.677388] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:07.084 [2024-02-13 07:18:40.677639] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:07.084 [2024-02-13 07:18:40.677694] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:07.084 07:18:40 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:07.343 [2024-02-13 07:18:40.897266] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:07.343 BaseBdev1 00:18:07.343 07:18:40 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:07.343 07:18:40 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:07.343 07:18:40 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:07.343 07:18:40 -- common/autotest_common.sh@887 -- # local i 00:18:07.343 07:18:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:07.343 07:18:40 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:07.343 07:18:40 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:07.602 07:18:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:07.862 [ 00:18:07.862 { 00:18:07.862 "name": "BaseBdev1", 00:18:07.862 "aliases": [ 00:18:07.862 "8e66c0b9-7b3e-4427-a623-8a2d95b1b016" 00:18:07.862 ], 00:18:07.862 "product_name": "Malloc disk", 00:18:07.862 "block_size": 512, 00:18:07.862 "num_blocks": 65536, 00:18:07.862 "uuid": "8e66c0b9-7b3e-4427-a623-8a2d95b1b016", 00:18:07.862 "assigned_rate_limits": { 00:18:07.862 "rw_ios_per_sec": 0, 00:18:07.862 "rw_mbytes_per_sec": 0, 00:18:07.862 "r_mbytes_per_sec": 0, 00:18:07.862 "w_mbytes_per_sec": 0 00:18:07.862 }, 00:18:07.862 "claimed": true, 00:18:07.862 "claim_type": "exclusive_write", 00:18:07.862 "zoned": false, 00:18:07.862 "supported_io_types": { 00:18:07.862 "read": true, 00:18:07.862 "write": true, 00:18:07.862 "unmap": true, 00:18:07.862 "write_zeroes": true, 00:18:07.862 "flush": true, 00:18:07.862 "reset": true, 00:18:07.862 "compare": false, 00:18:07.862 "compare_and_write": false, 00:18:07.862 "abort": true, 00:18:07.862 "nvme_admin": false, 00:18:07.862 "nvme_io": false 00:18:07.862 }, 00:18:07.862 "memory_domains": [ 00:18:07.862 { 00:18:07.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:07.862 "dma_device_type": 2 00:18:07.862 } 00:18:07.862 ], 00:18:07.862 "driver_specific": {} 00:18:07.862 } 00:18:07.862 ] 00:18:07.862 07:18:41 -- common/autotest_common.sh@893 -- # return 0 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.862 07:18:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.121 07:18:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:08.121 "name": "Existed_Raid", 00:18:08.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.121 "strip_size_kb": 64, 00:18:08.121 "state": "configuring", 00:18:08.121 "raid_level": "concat", 00:18:08.121 "superblock": false, 00:18:08.121 "num_base_bdevs": 4, 00:18:08.121 "num_base_bdevs_discovered": 1, 00:18:08.121 "num_base_bdevs_operational": 4, 00:18:08.121 "base_bdevs_list": [ 00:18:08.121 { 00:18:08.121 "name": "BaseBdev1", 00:18:08.121 "uuid": "8e66c0b9-7b3e-4427-a623-8a2d95b1b016", 00:18:08.121 "is_configured": true, 00:18:08.121 "data_offset": 0, 00:18:08.121 "data_size": 65536 00:18:08.121 }, 00:18:08.121 { 00:18:08.121 "name": "BaseBdev2", 00:18:08.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.121 "is_configured": false, 00:18:08.121 "data_offset": 0, 00:18:08.121 "data_size": 0 00:18:08.121 }, 
00:18:08.121 { 00:18:08.121 "name": "BaseBdev3", 00:18:08.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.121 "is_configured": false, 00:18:08.121 "data_offset": 0, 00:18:08.121 "data_size": 0 00:18:08.121 }, 00:18:08.121 { 00:18:08.121 "name": "BaseBdev4", 00:18:08.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.121 "is_configured": false, 00:18:08.121 "data_offset": 0, 00:18:08.121 "data_size": 0 00:18:08.121 } 00:18:08.121 ] 00:18:08.121 }' 00:18:08.122 07:18:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:08.122 07:18:41 -- common/autotest_common.sh@10 -- # set +x 00:18:08.692 07:18:42 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:08.952 [2024-02-13 07:18:42.393720] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:08.952 [2024-02-13 07:18:42.393963] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:08.952 [2024-02-13 07:18:42.601810] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:08.952 [2024-02-13 07:18:42.604070] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.952 [2024-02-13 07:18:42.604287] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.952 [2024-02-13 07:18:42.604443] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:08.952 [2024-02-13 07:18:42.604575] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:08.952 [2024-02-13 07:18:42.604678] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:08.952 [2024-02-13 07:18:42.604756] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:08.952 07:18:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.211 07:18:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:09.211 "name": "Existed_Raid", 00:18:09.211 
"uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.211 "strip_size_kb": 64, 00:18:09.211 "state": "configuring", 00:18:09.211 "raid_level": "concat", 00:18:09.211 "superblock": false, 00:18:09.211 "num_base_bdevs": 4, 00:18:09.211 "num_base_bdevs_discovered": 1, 00:18:09.211 "num_base_bdevs_operational": 4, 00:18:09.211 "base_bdevs_list": [ 00:18:09.211 { 00:18:09.211 "name": "BaseBdev1", 00:18:09.211 "uuid": "8e66c0b9-7b3e-4427-a623-8a2d95b1b016", 00:18:09.211 "is_configured": true, 00:18:09.211 "data_offset": 0, 00:18:09.211 "data_size": 65536 00:18:09.211 }, 00:18:09.211 { 00:18:09.211 "name": "BaseBdev2", 00:18:09.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.211 "is_configured": false, 00:18:09.211 "data_offset": 0, 00:18:09.211 "data_size": 0 00:18:09.211 }, 00:18:09.211 { 00:18:09.211 "name": "BaseBdev3", 00:18:09.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.211 "is_configured": false, 00:18:09.212 "data_offset": 0, 00:18:09.212 "data_size": 0 00:18:09.212 }, 00:18:09.212 { 00:18:09.212 "name": "BaseBdev4", 00:18:09.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.212 "is_configured": false, 00:18:09.212 "data_offset": 0, 00:18:09.212 "data_size": 0 00:18:09.212 } 00:18:09.212 ] 00:18:09.212 }' 00:18:09.212 07:18:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:09.212 07:18:42 -- common/autotest_common.sh@10 -- # set +x 00:18:10.147 07:18:43 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:10.406 [2024-02-13 07:18:43.850334] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.406 BaseBdev2 00:18:10.406 07:18:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:10.406 07:18:43 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:10.406 07:18:43 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:10.406 07:18:43 -- common/autotest_common.sh@887 -- # local i 00:18:10.406 07:18:43 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:10.406 07:18:43 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:10.406 07:18:43 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:10.665 07:18:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.665 [ 00:18:10.665 { 00:18:10.665 "name": "BaseBdev2", 00:18:10.665 "aliases": [ 00:18:10.665 "a42b2eca-842f-4f62-8a28-8a48d252ea30" 00:18:10.665 ], 00:18:10.665 "product_name": "Malloc disk", 00:18:10.665 "block_size": 512, 00:18:10.665 "num_blocks": 65536, 00:18:10.665 "uuid": "a42b2eca-842f-4f62-8a28-8a48d252ea30", 00:18:10.665 "assigned_rate_limits": { 00:18:10.665 "rw_ios_per_sec": 0, 00:18:10.665 "rw_mbytes_per_sec": 0, 00:18:10.665 "r_mbytes_per_sec": 0, 00:18:10.665 "w_mbytes_per_sec": 0 00:18:10.665 }, 00:18:10.665 "claimed": true, 00:18:10.665 "claim_type": "exclusive_write", 00:18:10.665 "zoned": false, 00:18:10.665 "supported_io_types": { 00:18:10.665 "read": true, 00:18:10.665 "write": true, 00:18:10.665 "unmap": true, 00:18:10.665 "write_zeroes": true, 00:18:10.665 "flush": true, 00:18:10.665 "reset": true, 00:18:10.665 "compare": false, 00:18:10.665 "compare_and_write": false, 00:18:10.665 "abort": true, 00:18:10.665 "nvme_admin": false, 00:18:10.665 "nvme_io": false 00:18:10.665 }, 00:18:10.665 "memory_domains": [ 
00:18:10.665 { 00:18:10.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.665 "dma_device_type": 2 00:18:10.665 } 00:18:10.665 ], 00:18:10.665 "driver_specific": {} 00:18:10.665 } 00:18:10.665 ] 00:18:10.665 07:18:44 -- common/autotest_common.sh@893 -- # return 0 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:10.665 07:18:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.924 07:18:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:10.924 "name": "Existed_Raid", 00:18:10.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.924 "strip_size_kb": 64, 00:18:10.924 "state": "configuring", 00:18:10.924 "raid_level": "concat", 00:18:10.924 "superblock": false, 00:18:10.924 "num_base_bdevs": 4, 00:18:10.924 "num_base_bdevs_discovered": 2, 00:18:10.924 "num_base_bdevs_operational": 4, 00:18:10.924 "base_bdevs_list": [ 00:18:10.924 { 00:18:10.924 "name": "BaseBdev1", 00:18:10.924 "uuid": "8e66c0b9-7b3e-4427-a623-8a2d95b1b016", 00:18:10.924 "is_configured": true, 00:18:10.924 "data_offset": 0, 00:18:10.924 "data_size": 65536 00:18:10.924 }, 00:18:10.924 { 00:18:10.924 "name": "BaseBdev2", 00:18:10.924 "uuid": "a42b2eca-842f-4f62-8a28-8a48d252ea30", 00:18:10.924 "is_configured": true, 00:18:10.924 "data_offset": 0, 00:18:10.924 "data_size": 65536 00:18:10.924 }, 00:18:10.924 { 00:18:10.924 "name": "BaseBdev3", 00:18:10.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.924 "is_configured": false, 00:18:10.924 "data_offset": 0, 00:18:10.924 "data_size": 0 00:18:10.924 }, 00:18:10.924 { 00:18:10.924 "name": "BaseBdev4", 00:18:10.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.924 "is_configured": false, 00:18:10.924 "data_offset": 0, 00:18:10.924 "data_size": 0 00:18:10.924 } 00:18:10.924 ] 00:18:10.924 }' 00:18:10.924 07:18:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:10.924 07:18:44 -- common/autotest_common.sh@10 -- # set +x 00:18:11.878 07:18:45 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:11.878 [2024-02-13 07:18:45.430578] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:11.878 BaseBdev3 00:18:11.878 07:18:45 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:11.878 07:18:45 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:11.878 07:18:45 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:11.878 
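# A minimal sketch of the waitforbdev helper being traced here: with no explicit
# timeout it defaults bdev_timeout to 2000 ms, waits for examine to finish, then
# asks bdev_get_bdevs for the named bdev with that timeout. Assuming only the
# socket path and repo layout shown in this log:
#   rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
#   $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
#   $rpc -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
#   $rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000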
07:18:45 -- common/autotest_common.sh@887 -- # local i 00:18:11.878 07:18:45 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:11.878 07:18:45 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:11.878 07:18:45 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:12.137 07:18:45 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:12.395 [ 00:18:12.395 { 00:18:12.395 "name": "BaseBdev3", 00:18:12.395 "aliases": [ 00:18:12.395 "c1fe494a-acfb-4913-ba28-9f0f93a10698" 00:18:12.395 ], 00:18:12.395 "product_name": "Malloc disk", 00:18:12.395 "block_size": 512, 00:18:12.395 "num_blocks": 65536, 00:18:12.395 "uuid": "c1fe494a-acfb-4913-ba28-9f0f93a10698", 00:18:12.395 "assigned_rate_limits": { 00:18:12.395 "rw_ios_per_sec": 0, 00:18:12.395 "rw_mbytes_per_sec": 0, 00:18:12.395 "r_mbytes_per_sec": 0, 00:18:12.395 "w_mbytes_per_sec": 0 00:18:12.395 }, 00:18:12.395 "claimed": true, 00:18:12.395 "claim_type": "exclusive_write", 00:18:12.395 "zoned": false, 00:18:12.395 "supported_io_types": { 00:18:12.395 "read": true, 00:18:12.395 "write": true, 00:18:12.395 "unmap": true, 00:18:12.395 "write_zeroes": true, 00:18:12.395 "flush": true, 00:18:12.395 "reset": true, 00:18:12.395 "compare": false, 00:18:12.395 "compare_and_write": false, 00:18:12.395 "abort": true, 00:18:12.395 "nvme_admin": false, 00:18:12.395 "nvme_io": false 00:18:12.395 }, 00:18:12.395 "memory_domains": [ 00:18:12.395 { 00:18:12.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.395 "dma_device_type": 2 00:18:12.395 } 00:18:12.395 ], 00:18:12.395 "driver_specific": {} 00:18:12.395 } 00:18:12.395 ] 00:18:12.395 07:18:45 -- common/autotest_common.sh@893 -- # return 0 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.395 07:18:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.653 07:18:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:12.653 "name": "Existed_Raid", 00:18:12.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.653 "strip_size_kb": 64, 00:18:12.653 "state": "configuring", 00:18:12.653 "raid_level": "concat", 00:18:12.653 "superblock": false, 00:18:12.653 "num_base_bdevs": 4, 00:18:12.653 "num_base_bdevs_discovered": 3, 00:18:12.653 "num_base_bdevs_operational": 4, 00:18:12.654 "base_bdevs_list": [ 00:18:12.654 { 00:18:12.654 "name": 
"BaseBdev1", 00:18:12.654 "uuid": "8e66c0b9-7b3e-4427-a623-8a2d95b1b016", 00:18:12.654 "is_configured": true, 00:18:12.654 "data_offset": 0, 00:18:12.654 "data_size": 65536 00:18:12.654 }, 00:18:12.654 { 00:18:12.654 "name": "BaseBdev2", 00:18:12.654 "uuid": "a42b2eca-842f-4f62-8a28-8a48d252ea30", 00:18:12.654 "is_configured": true, 00:18:12.654 "data_offset": 0, 00:18:12.654 "data_size": 65536 00:18:12.654 }, 00:18:12.654 { 00:18:12.654 "name": "BaseBdev3", 00:18:12.654 "uuid": "c1fe494a-acfb-4913-ba28-9f0f93a10698", 00:18:12.654 "is_configured": true, 00:18:12.654 "data_offset": 0, 00:18:12.654 "data_size": 65536 00:18:12.654 }, 00:18:12.654 { 00:18:12.654 "name": "BaseBdev4", 00:18:12.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.654 "is_configured": false, 00:18:12.654 "data_offset": 0, 00:18:12.654 "data_size": 0 00:18:12.654 } 00:18:12.654 ] 00:18:12.654 }' 00:18:12.654 07:18:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:12.654 07:18:46 -- common/autotest_common.sh@10 -- # set +x 00:18:13.220 07:18:46 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:13.479 [2024-02-13 07:18:47.127714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:13.479 [2024-02-13 07:18:47.128039] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:13.479 [2024-02-13 07:18:47.128082] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:18:13.479 [2024-02-13 07:18:47.128306] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:13.479 [2024-02-13 07:18:47.128868] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:13.479 [2024-02-13 07:18:47.129007] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:13.479 [2024-02-13 07:18:47.129438] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.479 BaseBdev4 00:18:13.479 07:18:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:13.479 07:18:47 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:18:13.479 07:18:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:13.479 07:18:47 -- common/autotest_common.sh@887 -- # local i 00:18:13.479 07:18:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:13.479 07:18:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:13.479 07:18:47 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:13.738 07:18:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:13.996 [ 00:18:13.996 { 00:18:13.996 "name": "BaseBdev4", 00:18:13.996 "aliases": [ 00:18:13.996 "fb9a44cb-7cfb-48d6-bf0e-525678b4cd03" 00:18:13.996 ], 00:18:13.996 "product_name": "Malloc disk", 00:18:13.996 "block_size": 512, 00:18:13.996 "num_blocks": 65536, 00:18:13.996 "uuid": "fb9a44cb-7cfb-48d6-bf0e-525678b4cd03", 00:18:13.996 "assigned_rate_limits": { 00:18:13.996 "rw_ios_per_sec": 0, 00:18:13.996 "rw_mbytes_per_sec": 0, 00:18:13.996 "r_mbytes_per_sec": 0, 00:18:13.996 "w_mbytes_per_sec": 0 00:18:13.996 }, 00:18:13.996 "claimed": true, 00:18:13.996 "claim_type": "exclusive_write", 00:18:13.996 "zoned": false, 00:18:13.996 
"supported_io_types": { 00:18:13.996 "read": true, 00:18:13.996 "write": true, 00:18:13.996 "unmap": true, 00:18:13.996 "write_zeroes": true, 00:18:13.996 "flush": true, 00:18:13.996 "reset": true, 00:18:13.996 "compare": false, 00:18:13.996 "compare_and_write": false, 00:18:13.996 "abort": true, 00:18:13.996 "nvme_admin": false, 00:18:13.996 "nvme_io": false 00:18:13.996 }, 00:18:13.996 "memory_domains": [ 00:18:13.996 { 00:18:13.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:13.996 "dma_device_type": 2 00:18:13.996 } 00:18:13.996 ], 00:18:13.996 "driver_specific": {} 00:18:13.996 } 00:18:13.996 ] 00:18:13.996 07:18:47 -- common/autotest_common.sh@893 -- # return 0 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.996 07:18:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.256 07:18:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:14.256 "name": "Existed_Raid", 00:18:14.256 "uuid": "65466c07-f3ec-4ed5-92ab-371f3d49bc76", 00:18:14.256 "strip_size_kb": 64, 00:18:14.256 "state": "online", 00:18:14.256 "raid_level": "concat", 00:18:14.256 "superblock": false, 00:18:14.256 "num_base_bdevs": 4, 00:18:14.256 "num_base_bdevs_discovered": 4, 00:18:14.256 "num_base_bdevs_operational": 4, 00:18:14.256 "base_bdevs_list": [ 00:18:14.256 { 00:18:14.256 "name": "BaseBdev1", 00:18:14.256 "uuid": "8e66c0b9-7b3e-4427-a623-8a2d95b1b016", 00:18:14.256 "is_configured": true, 00:18:14.256 "data_offset": 0, 00:18:14.256 "data_size": 65536 00:18:14.256 }, 00:18:14.256 { 00:18:14.256 "name": "BaseBdev2", 00:18:14.256 "uuid": "a42b2eca-842f-4f62-8a28-8a48d252ea30", 00:18:14.256 "is_configured": true, 00:18:14.256 "data_offset": 0, 00:18:14.256 "data_size": 65536 00:18:14.256 }, 00:18:14.256 { 00:18:14.256 "name": "BaseBdev3", 00:18:14.256 "uuid": "c1fe494a-acfb-4913-ba28-9f0f93a10698", 00:18:14.256 "is_configured": true, 00:18:14.256 "data_offset": 0, 00:18:14.256 "data_size": 65536 00:18:14.256 }, 00:18:14.256 { 00:18:14.256 "name": "BaseBdev4", 00:18:14.256 "uuid": "fb9a44cb-7cfb-48d6-bf0e-525678b4cd03", 00:18:14.256 "is_configured": true, 00:18:14.256 "data_offset": 0, 00:18:14.256 "data_size": 65536 00:18:14.256 } 00:18:14.256 ] 00:18:14.256 }' 00:18:14.256 07:18:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:14.256 07:18:47 -- common/autotest_common.sh@10 -- # set +x 00:18:14.823 07:18:48 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 
00:18:15.083 [2024-02-13 07:18:48.672303] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.083 [2024-02-13 07:18:48.672518] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:15.083 [2024-02-13 07:18:48.672707] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.083 07:18:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.342 07:18:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:15.342 "name": "Existed_Raid", 00:18:15.342 "uuid": "65466c07-f3ec-4ed5-92ab-371f3d49bc76", 00:18:15.342 "strip_size_kb": 64, 00:18:15.342 "state": "offline", 00:18:15.342 "raid_level": "concat", 00:18:15.342 "superblock": false, 00:18:15.342 "num_base_bdevs": 4, 00:18:15.342 "num_base_bdevs_discovered": 3, 00:18:15.342 "num_base_bdevs_operational": 3, 00:18:15.342 "base_bdevs_list": [ 00:18:15.342 { 00:18:15.342 "name": null, 00:18:15.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.342 "is_configured": false, 00:18:15.342 "data_offset": 0, 00:18:15.342 "data_size": 65536 00:18:15.342 }, 00:18:15.342 { 00:18:15.342 "name": "BaseBdev2", 00:18:15.342 "uuid": "a42b2eca-842f-4f62-8a28-8a48d252ea30", 00:18:15.342 "is_configured": true, 00:18:15.342 "data_offset": 0, 00:18:15.342 "data_size": 65536 00:18:15.342 }, 00:18:15.342 { 00:18:15.342 "name": "BaseBdev3", 00:18:15.342 "uuid": "c1fe494a-acfb-4913-ba28-9f0f93a10698", 00:18:15.342 "is_configured": true, 00:18:15.342 "data_offset": 0, 00:18:15.342 "data_size": 65536 00:18:15.342 }, 00:18:15.342 { 00:18:15.342 "name": "BaseBdev4", 00:18:15.342 "uuid": "fb9a44cb-7cfb-48d6-bf0e-525678b4cd03", 00:18:15.342 "is_configured": true, 00:18:15.342 "data_offset": 0, 00:18:15.342 "data_size": 65536 00:18:15.342 } 00:18:15.342 ] 00:18:15.342 }' 00:18:15.342 07:18:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:15.342 07:18:48 -- common/autotest_common.sh@10 -- # set +x 00:18:16.279 07:18:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:16.279 07:18:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.279 07:18:49 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:18:16.280 07:18:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.280 07:18:49 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.280 07:18:49 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.280 07:18:49 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:16.538 [2024-02-13 07:18:50.089020] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:16.538 07:18:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:16.538 07:18:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:16.538 07:18:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.538 07:18:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:16.797 07:18:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:16.797 07:18:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:16.797 07:18:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:17.056 [2024-02-13 07:18:50.614132] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:17.056 07:18:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:17.056 07:18:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:17.056 07:18:50 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.056 07:18:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:17.315 07:18:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:17.315 07:18:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:17.315 07:18:50 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:17.574 [2024-02-13 07:18:51.136176] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:17.574 [2024-02-13 07:18:51.136443] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:17.574 07:18:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:17.574 07:18:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:17.574 07:18:51 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.574 07:18:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:17.832 07:18:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:17.832 07:18:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:17.832 07:18:51 -- bdev/bdev_raid.sh@287 -- # killprocess 124285 00:18:17.832 07:18:51 -- common/autotest_common.sh@924 -- # '[' -z 124285 ']' 00:18:17.832 07:18:51 -- common/autotest_common.sh@928 -- # kill -0 124285 00:18:17.832 07:18:51 -- common/autotest_common.sh@929 -- # uname 00:18:17.832 07:18:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:17.832 07:18:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 124285 00:18:17.832 killing process with pid 124285 00:18:17.832 07:18:51 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:17.832 07:18:51 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:17.832 07:18:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 124285' 00:18:17.832 07:18:51 -- common/autotest_common.sh@943 
-- # kill 124285 00:18:17.832 07:18:51 -- common/autotest_common.sh@948 -- # wait 124285 00:18:17.832 [2024-02-13 07:18:51.487863] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.832 [2024-02-13 07:18:51.488032] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.211 ************************************ 00:18:19.211 END TEST raid_state_function_test 00:18:19.211 ************************************ 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:19.211 00:18:19.211 real 0m14.550s 00:18:19.211 user 0m26.164s 00:18:19.211 sys 0m1.644s 00:18:19.211 07:18:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:19.211 07:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:18:19.211 07:18:52 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:18:19.211 07:18:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:19.211 07:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:19.211 ************************************ 00:18:19.211 START TEST raid_state_function_test_sb 00:18:19.211 ************************************ 00:18:19.211 07:18:52 -- common/autotest_common.sh@1102 -- # raid_state_function_test concat 4 true 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@226 -- # 
raid_pid=124753 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 124753' 00:18:19.211 Process raid pid: 124753 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:19.211 07:18:52 -- bdev/bdev_raid.sh@228 -- # waitforlisten 124753 /var/tmp/spdk-raid.sock 00:18:19.211 07:18:52 -- common/autotest_common.sh@817 -- # '[' -z 124753 ']' 00:18:19.211 07:18:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:19.211 07:18:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:19.211 07:18:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:19.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:19.211 07:18:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:19.211 07:18:52 -- common/autotest_common.sh@10 -- # set +x 00:18:19.211 [2024-02-13 07:18:52.656947] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:18:19.211 [2024-02-13 07:18:52.657464] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.211 [2024-02-13 07:18:52.829118] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.470 [2024-02-13 07:18:53.057857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.729 [2024-02-13 07:18:53.245260] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:19.987 07:18:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:19.987 07:18:53 -- common/autotest_common.sh@850 -- # return 0 00:18:19.987 07:18:53 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:20.246 [2024-02-13 07:18:53.856296] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.246 [2024-02-13 07:18:53.856586] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.246 [2024-02-13 07:18:53.856691] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.246 [2024-02-13 07:18:53.856830] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.246 [2024-02-13 07:18:53.856936] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:20.246 [2024-02-13 07:18:53.857008] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:20.246 [2024-02-13 07:18:53.857243] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:20.246 [2024-02-13 07:18:53.857299] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:20.246 07:18:53 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:20.246 07:18:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:20.246 07:18:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:20.246 07:18:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:20.246 
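# verify_raid_bdev_state, whose locals are being assigned in this trace, reduces
# to one RPC plus the jq filter that appears verbatim throughout this log; the
# returned fields are then checked against the expected state, level, strip size
# and bdev counts passed as arguments. A sketch using only names from the log:
#   rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
#   $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
#     | jq -r '.[] | select(.name == "Existed_Raid")'
#   # expect: .state == "configuring", .raid_level == "concat",
#   #         .strip_size_kb == 64, .num_base_bdevs == 4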
07:18:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:20.246 07:18:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:20.246 07:18:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:20.247 07:18:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:20.247 07:18:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:20.247 07:18:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:20.247 07:18:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.247 07:18:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.505 07:18:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:20.505 "name": "Existed_Raid", 00:18:20.505 "uuid": "8557bcdf-aff6-46fb-aee9-a0e00d247ebb", 00:18:20.505 "strip_size_kb": 64, 00:18:20.505 "state": "configuring", 00:18:20.505 "raid_level": "concat", 00:18:20.505 "superblock": true, 00:18:20.505 "num_base_bdevs": 4, 00:18:20.505 "num_base_bdevs_discovered": 0, 00:18:20.505 "num_base_bdevs_operational": 4, 00:18:20.505 "base_bdevs_list": [ 00:18:20.505 { 00:18:20.505 "name": "BaseBdev1", 00:18:20.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.505 "is_configured": false, 00:18:20.505 "data_offset": 0, 00:18:20.505 "data_size": 0 00:18:20.505 }, 00:18:20.505 { 00:18:20.505 "name": "BaseBdev2", 00:18:20.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.505 "is_configured": false, 00:18:20.505 "data_offset": 0, 00:18:20.505 "data_size": 0 00:18:20.505 }, 00:18:20.505 { 00:18:20.505 "name": "BaseBdev3", 00:18:20.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.505 "is_configured": false, 00:18:20.505 "data_offset": 0, 00:18:20.506 "data_size": 0 00:18:20.506 }, 00:18:20.506 { 00:18:20.506 "name": "BaseBdev4", 00:18:20.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.506 "is_configured": false, 00:18:20.506 "data_offset": 0, 00:18:20.506 "data_size": 0 00:18:20.506 } 00:18:20.506 ] 00:18:20.506 }' 00:18:20.506 07:18:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:20.506 07:18:54 -- common/autotest_common.sh@10 -- # set +x 00:18:21.107 07:18:54 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:21.366 [2024-02-13 07:18:55.000358] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.366 [2024-02-13 07:18:55.000708] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:21.366 07:18:55 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:21.624 [2024-02-13 07:18:55.252592] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:21.624 [2024-02-13 07:18:55.252934] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:21.624 [2024-02-13 07:18:55.253099] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.624 [2024-02-13 07:18:55.253185] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.624 [2024-02-13 07:18:55.253348] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:21.624 [2024-02-13 07:18:55.253446] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:21.625 [2024-02-13 07:18:55.253610] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:21.625 [2024-02-13 07:18:55.253693] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:21.625 07:18:55 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:21.883 [2024-02-13 07:18:55.490500] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.883 BaseBdev1 00:18:21.883 07:18:55 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:21.883 07:18:55 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:21.883 07:18:55 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:21.883 07:18:55 -- common/autotest_common.sh@887 -- # local i 00:18:21.883 07:18:55 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:21.883 07:18:55 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:21.883 07:18:55 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:22.142 07:18:55 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:22.401 [ 00:18:22.401 { 00:18:22.401 "name": "BaseBdev1", 00:18:22.401 "aliases": [ 00:18:22.401 "098f60ad-2878-4c9a-8291-6dc8c48d13b8" 00:18:22.401 ], 00:18:22.401 "product_name": "Malloc disk", 00:18:22.401 "block_size": 512, 00:18:22.401 "num_blocks": 65536, 00:18:22.401 "uuid": "098f60ad-2878-4c9a-8291-6dc8c48d13b8", 00:18:22.401 "assigned_rate_limits": { 00:18:22.401 "rw_ios_per_sec": 0, 00:18:22.401 "rw_mbytes_per_sec": 0, 00:18:22.401 "r_mbytes_per_sec": 0, 00:18:22.401 "w_mbytes_per_sec": 0 00:18:22.401 }, 00:18:22.401 "claimed": true, 00:18:22.401 "claim_type": "exclusive_write", 00:18:22.401 "zoned": false, 00:18:22.401 "supported_io_types": { 00:18:22.401 "read": true, 00:18:22.401 "write": true, 00:18:22.401 "unmap": true, 00:18:22.401 "write_zeroes": true, 00:18:22.401 "flush": true, 00:18:22.401 "reset": true, 00:18:22.401 "compare": false, 00:18:22.401 "compare_and_write": false, 00:18:22.401 "abort": true, 00:18:22.401 "nvme_admin": false, 00:18:22.401 "nvme_io": false 00:18:22.401 }, 00:18:22.401 "memory_domains": [ 00:18:22.401 { 00:18:22.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.401 "dma_device_type": 2 00:18:22.401 } 00:18:22.401 ], 00:18:22.401 "driver_specific": {} 00:18:22.401 } 00:18:22.401 ] 00:18:22.401 07:18:55 -- common/autotest_common.sh@893 -- # return 0 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:22.401 07:18:55 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.401 07:18:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.659 07:18:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:22.659 "name": "Existed_Raid", 00:18:22.659 "uuid": "7e4d77a8-8798-4589-8070-b24aa44b945a", 00:18:22.659 "strip_size_kb": 64, 00:18:22.659 "state": "configuring", 00:18:22.659 "raid_level": "concat", 00:18:22.659 "superblock": true, 00:18:22.659 "num_base_bdevs": 4, 00:18:22.659 "num_base_bdevs_discovered": 1, 00:18:22.659 "num_base_bdevs_operational": 4, 00:18:22.659 "base_bdevs_list": [ 00:18:22.659 { 00:18:22.659 "name": "BaseBdev1", 00:18:22.659 "uuid": "098f60ad-2878-4c9a-8291-6dc8c48d13b8", 00:18:22.659 "is_configured": true, 00:18:22.659 "data_offset": 2048, 00:18:22.659 "data_size": 63488 00:18:22.659 }, 00:18:22.659 { 00:18:22.659 "name": "BaseBdev2", 00:18:22.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.659 "is_configured": false, 00:18:22.659 "data_offset": 0, 00:18:22.659 "data_size": 0 00:18:22.659 }, 00:18:22.659 { 00:18:22.659 "name": "BaseBdev3", 00:18:22.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.659 "is_configured": false, 00:18:22.659 "data_offset": 0, 00:18:22.659 "data_size": 0 00:18:22.659 }, 00:18:22.659 { 00:18:22.659 "name": "BaseBdev4", 00:18:22.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.659 "is_configured": false, 00:18:22.659 "data_offset": 0, 00:18:22.659 "data_size": 0 00:18:22.659 } 00:18:22.659 ] 00:18:22.659 }' 00:18:22.659 07:18:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:22.659 07:18:56 -- common/autotest_common.sh@10 -- # set +x 00:18:23.226 07:18:56 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:23.484 [2024-02-13 07:18:57.131071] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.485 [2024-02-13 07:18:57.131397] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:23.485 07:18:57 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:18:23.485 07:18:57 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:23.743 07:18:57 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:24.002 BaseBdev1 00:18:24.002 07:18:57 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:18:24.002 07:18:57 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:24.002 07:18:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:24.002 07:18:57 -- common/autotest_common.sh@887 -- # local i 00:18:24.002 07:18:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:24.002 07:18:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:24.002 07:18:57 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:24.260 07:18:57 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:24.518 [ 00:18:24.518 { 00:18:24.518 "name": "BaseBdev1", 00:18:24.518 "aliases": [ 00:18:24.518 
"bb6120de-3266-433c-92e2-956cb9b3a242" 00:18:24.518 ], 00:18:24.518 "product_name": "Malloc disk", 00:18:24.518 "block_size": 512, 00:18:24.518 "num_blocks": 65536, 00:18:24.518 "uuid": "bb6120de-3266-433c-92e2-956cb9b3a242", 00:18:24.518 "assigned_rate_limits": { 00:18:24.518 "rw_ios_per_sec": 0, 00:18:24.518 "rw_mbytes_per_sec": 0, 00:18:24.518 "r_mbytes_per_sec": 0, 00:18:24.518 "w_mbytes_per_sec": 0 00:18:24.518 }, 00:18:24.518 "claimed": false, 00:18:24.518 "zoned": false, 00:18:24.518 "supported_io_types": { 00:18:24.518 "read": true, 00:18:24.518 "write": true, 00:18:24.518 "unmap": true, 00:18:24.518 "write_zeroes": true, 00:18:24.518 "flush": true, 00:18:24.518 "reset": true, 00:18:24.518 "compare": false, 00:18:24.518 "compare_and_write": false, 00:18:24.518 "abort": true, 00:18:24.518 "nvme_admin": false, 00:18:24.518 "nvme_io": false 00:18:24.518 }, 00:18:24.518 "memory_domains": [ 00:18:24.518 { 00:18:24.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.518 "dma_device_type": 2 00:18:24.518 } 00:18:24.518 ], 00:18:24.518 "driver_specific": {} 00:18:24.518 } 00:18:24.518 ] 00:18:24.518 07:18:58 -- common/autotest_common.sh@893 -- # return 0 00:18:24.518 07:18:58 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:24.777 [2024-02-13 07:18:58.263784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:24.777 [2024-02-13 07:18:58.265726] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:24.777 [2024-02-13 07:18:58.265965] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:24.777 [2024-02-13 07:18:58.266092] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:24.777 [2024-02-13 07:18:58.266159] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:24.777 [2024-02-13 07:18:58.266266] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:24.777 [2024-02-13 07:18:58.266325] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.777 07:18:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.037 07:18:58 -- bdev/bdev_raid.sh@127 -- # 
raid_bdev_info='{ 00:18:25.037 "name": "Existed_Raid", 00:18:25.037 "uuid": "8a6a3a6c-0c28-4099-8ff2-3eb447bf976f", 00:18:25.037 "strip_size_kb": 64, 00:18:25.037 "state": "configuring", 00:18:25.037 "raid_level": "concat", 00:18:25.037 "superblock": true, 00:18:25.037 "num_base_bdevs": 4, 00:18:25.037 "num_base_bdevs_discovered": 1, 00:18:25.037 "num_base_bdevs_operational": 4, 00:18:25.037 "base_bdevs_list": [ 00:18:25.037 { 00:18:25.037 "name": "BaseBdev1", 00:18:25.037 "uuid": "bb6120de-3266-433c-92e2-956cb9b3a242", 00:18:25.037 "is_configured": true, 00:18:25.037 "data_offset": 2048, 00:18:25.037 "data_size": 63488 00:18:25.037 }, 00:18:25.037 { 00:18:25.037 "name": "BaseBdev2", 00:18:25.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.037 "is_configured": false, 00:18:25.037 "data_offset": 0, 00:18:25.037 "data_size": 0 00:18:25.037 }, 00:18:25.037 { 00:18:25.037 "name": "BaseBdev3", 00:18:25.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.037 "is_configured": false, 00:18:25.037 "data_offset": 0, 00:18:25.037 "data_size": 0 00:18:25.037 }, 00:18:25.037 { 00:18:25.037 "name": "BaseBdev4", 00:18:25.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.037 "is_configured": false, 00:18:25.037 "data_offset": 0, 00:18:25.037 "data_size": 0 00:18:25.037 } 00:18:25.037 ] 00:18:25.037 }' 00:18:25.037 07:18:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:25.037 07:18:58 -- common/autotest_common.sh@10 -- # set +x 00:18:25.603 07:18:59 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:25.862 [2024-02-13 07:18:59.347342] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.862 BaseBdev2 00:18:25.862 07:18:59 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:25.862 07:18:59 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:25.862 07:18:59 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:25.862 07:18:59 -- common/autotest_common.sh@887 -- # local i 00:18:25.862 07:18:59 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:25.862 07:18:59 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:25.862 07:18:59 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:26.121 07:18:59 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:26.121 [ 00:18:26.121 { 00:18:26.121 "name": "BaseBdev2", 00:18:26.121 "aliases": [ 00:18:26.121 "26d81682-a57c-4f5d-b0b8-cfaae8a6e246" 00:18:26.121 ], 00:18:26.121 "product_name": "Malloc disk", 00:18:26.121 "block_size": 512, 00:18:26.121 "num_blocks": 65536, 00:18:26.121 "uuid": "26d81682-a57c-4f5d-b0b8-cfaae8a6e246", 00:18:26.121 "assigned_rate_limits": { 00:18:26.121 "rw_ios_per_sec": 0, 00:18:26.121 "rw_mbytes_per_sec": 0, 00:18:26.121 "r_mbytes_per_sec": 0, 00:18:26.121 "w_mbytes_per_sec": 0 00:18:26.121 }, 00:18:26.121 "claimed": true, 00:18:26.121 "claim_type": "exclusive_write", 00:18:26.121 "zoned": false, 00:18:26.121 "supported_io_types": { 00:18:26.121 "read": true, 00:18:26.121 "write": true, 00:18:26.121 "unmap": true, 00:18:26.121 "write_zeroes": true, 00:18:26.121 "flush": true, 00:18:26.121 "reset": true, 00:18:26.121 "compare": false, 00:18:26.121 "compare_and_write": false, 00:18:26.121 "abort": true, 00:18:26.121 "nvme_admin": false, 00:18:26.121 
"nvme_io": false 00:18:26.121 }, 00:18:26.121 "memory_domains": [ 00:18:26.121 { 00:18:26.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.121 "dma_device_type": 2 00:18:26.121 } 00:18:26.121 ], 00:18:26.121 "driver_specific": {} 00:18:26.121 } 00:18:26.121 ] 00:18:26.121 07:18:59 -- common/autotest_common.sh@893 -- # return 0 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.121 07:18:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.379 07:18:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:26.379 "name": "Existed_Raid", 00:18:26.379 "uuid": "8a6a3a6c-0c28-4099-8ff2-3eb447bf976f", 00:18:26.379 "strip_size_kb": 64, 00:18:26.379 "state": "configuring", 00:18:26.379 "raid_level": "concat", 00:18:26.379 "superblock": true, 00:18:26.379 "num_base_bdevs": 4, 00:18:26.379 "num_base_bdevs_discovered": 2, 00:18:26.379 "num_base_bdevs_operational": 4, 00:18:26.379 "base_bdevs_list": [ 00:18:26.379 { 00:18:26.379 "name": "BaseBdev1", 00:18:26.379 "uuid": "bb6120de-3266-433c-92e2-956cb9b3a242", 00:18:26.379 "is_configured": true, 00:18:26.379 "data_offset": 2048, 00:18:26.379 "data_size": 63488 00:18:26.379 }, 00:18:26.379 { 00:18:26.379 "name": "BaseBdev2", 00:18:26.379 "uuid": "26d81682-a57c-4f5d-b0b8-cfaae8a6e246", 00:18:26.379 "is_configured": true, 00:18:26.379 "data_offset": 2048, 00:18:26.379 "data_size": 63488 00:18:26.379 }, 00:18:26.379 { 00:18:26.379 "name": "BaseBdev3", 00:18:26.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.379 "is_configured": false, 00:18:26.379 "data_offset": 0, 00:18:26.379 "data_size": 0 00:18:26.379 }, 00:18:26.379 { 00:18:26.379 "name": "BaseBdev4", 00:18:26.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.379 "is_configured": false, 00:18:26.379 "data_offset": 0, 00:18:26.379 "data_size": 0 00:18:26.379 } 00:18:26.379 ] 00:18:26.379 }' 00:18:26.379 07:18:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:26.379 07:18:59 -- common/autotest_common.sh@10 -- # set +x 00:18:27.313 07:19:00 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:27.313 [2024-02-13 07:19:00.877530] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:27.313 BaseBdev3 00:18:27.313 07:19:00 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:27.313 07:19:00 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:27.313 07:19:00 -- 
common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:27.313 07:19:00 -- common/autotest_common.sh@887 -- # local i 00:18:27.314 07:19:00 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:27.314 07:19:00 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:27.314 07:19:00 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:27.572 07:19:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:27.572 [ 00:18:27.572 { 00:18:27.572 "name": "BaseBdev3", 00:18:27.572 "aliases": [ 00:18:27.572 "57f7e5dc-af95-42b6-afdf-a56c9727f513" 00:18:27.572 ], 00:18:27.572 "product_name": "Malloc disk", 00:18:27.572 "block_size": 512, 00:18:27.572 "num_blocks": 65536, 00:18:27.572 "uuid": "57f7e5dc-af95-42b6-afdf-a56c9727f513", 00:18:27.572 "assigned_rate_limits": { 00:18:27.572 "rw_ios_per_sec": 0, 00:18:27.572 "rw_mbytes_per_sec": 0, 00:18:27.572 "r_mbytes_per_sec": 0, 00:18:27.572 "w_mbytes_per_sec": 0 00:18:27.572 }, 00:18:27.572 "claimed": true, 00:18:27.572 "claim_type": "exclusive_write", 00:18:27.572 "zoned": false, 00:18:27.572 "supported_io_types": { 00:18:27.572 "read": true, 00:18:27.572 "write": true, 00:18:27.572 "unmap": true, 00:18:27.572 "write_zeroes": true, 00:18:27.572 "flush": true, 00:18:27.572 "reset": true, 00:18:27.572 "compare": false, 00:18:27.572 "compare_and_write": false, 00:18:27.572 "abort": true, 00:18:27.572 "nvme_admin": false, 00:18:27.572 "nvme_io": false 00:18:27.572 }, 00:18:27.572 "memory_domains": [ 00:18:27.572 { 00:18:27.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.572 "dma_device_type": 2 00:18:27.572 } 00:18:27.572 ], 00:18:27.572 "driver_specific": {} 00:18:27.572 } 00:18:27.572 ] 00:18:27.831 07:19:01 -- common/autotest_common.sh@893 -- # return 0 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:27.831 "name": "Existed_Raid", 00:18:27.831 "uuid": "8a6a3a6c-0c28-4099-8ff2-3eb447bf976f", 00:18:27.831 "strip_size_kb": 64, 00:18:27.831 "state": "configuring", 00:18:27.831 "raid_level": "concat", 00:18:27.831 "superblock": true, 00:18:27.831 "num_base_bdevs": 4, 00:18:27.831 "num_base_bdevs_discovered": 3, 00:18:27.831 "num_base_bdevs_operational": 4, 
00:18:27.831 "base_bdevs_list": [ 00:18:27.831 { 00:18:27.831 "name": "BaseBdev1", 00:18:27.831 "uuid": "bb6120de-3266-433c-92e2-956cb9b3a242", 00:18:27.831 "is_configured": true, 00:18:27.831 "data_offset": 2048, 00:18:27.831 "data_size": 63488 00:18:27.831 }, 00:18:27.831 { 00:18:27.831 "name": "BaseBdev2", 00:18:27.831 "uuid": "26d81682-a57c-4f5d-b0b8-cfaae8a6e246", 00:18:27.831 "is_configured": true, 00:18:27.831 "data_offset": 2048, 00:18:27.831 "data_size": 63488 00:18:27.831 }, 00:18:27.831 { 00:18:27.831 "name": "BaseBdev3", 00:18:27.831 "uuid": "57f7e5dc-af95-42b6-afdf-a56c9727f513", 00:18:27.831 "is_configured": true, 00:18:27.831 "data_offset": 2048, 00:18:27.831 "data_size": 63488 00:18:27.831 }, 00:18:27.831 { 00:18:27.831 "name": "BaseBdev4", 00:18:27.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.831 "is_configured": false, 00:18:27.831 "data_offset": 0, 00:18:27.831 "data_size": 0 00:18:27.831 } 00:18:27.831 ] 00:18:27.831 }' 00:18:27.831 07:19:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:27.831 07:19:01 -- common/autotest_common.sh@10 -- # set +x 00:18:28.765 07:19:02 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:29.023 [2024-02-13 07:19:02.465969] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:29.023 [2024-02-13 07:19:02.466387] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:18:29.023 [2024-02-13 07:19:02.466509] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:29.023 BaseBdev4 00:18:29.023 [2024-02-13 07:19:02.466689] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:18:29.023 [2024-02-13 07:19:02.467240] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:18:29.023 [2024-02-13 07:19:02.467366] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:18:29.023 [2024-02-13 07:19:02.467612] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.023 07:19:02 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:29.023 07:19:02 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:18:29.023 07:19:02 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:29.023 07:19:02 -- common/autotest_common.sh@887 -- # local i 00:18:29.023 07:19:02 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:29.023 07:19:02 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:29.023 07:19:02 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:29.023 07:19:02 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:29.280 [ 00:18:29.280 { 00:18:29.280 "name": "BaseBdev4", 00:18:29.280 "aliases": [ 00:18:29.280 "5fd6d3cc-f872-4eec-bcca-12a9338f293d" 00:18:29.280 ], 00:18:29.280 "product_name": "Malloc disk", 00:18:29.280 "block_size": 512, 00:18:29.280 "num_blocks": 65536, 00:18:29.280 "uuid": "5fd6d3cc-f872-4eec-bcca-12a9338f293d", 00:18:29.280 "assigned_rate_limits": { 00:18:29.280 "rw_ios_per_sec": 0, 00:18:29.280 "rw_mbytes_per_sec": 0, 00:18:29.280 "r_mbytes_per_sec": 0, 00:18:29.280 "w_mbytes_per_sec": 0 00:18:29.280 }, 00:18:29.280 "claimed": true, 00:18:29.280 "claim_type": 
"exclusive_write", 00:18:29.280 "zoned": false, 00:18:29.280 "supported_io_types": { 00:18:29.280 "read": true, 00:18:29.280 "write": true, 00:18:29.280 "unmap": true, 00:18:29.280 "write_zeroes": true, 00:18:29.280 "flush": true, 00:18:29.280 "reset": true, 00:18:29.280 "compare": false, 00:18:29.280 "compare_and_write": false, 00:18:29.281 "abort": true, 00:18:29.281 "nvme_admin": false, 00:18:29.281 "nvme_io": false 00:18:29.281 }, 00:18:29.281 "memory_domains": [ 00:18:29.281 { 00:18:29.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.281 "dma_device_type": 2 00:18:29.281 } 00:18:29.281 ], 00:18:29.281 "driver_specific": {} 00:18:29.281 } 00:18:29.281 ] 00:18:29.281 07:19:02 -- common/autotest_common.sh@893 -- # return 0 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.281 07:19:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.539 07:19:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:29.539 "name": "Existed_Raid", 00:18:29.539 "uuid": "8a6a3a6c-0c28-4099-8ff2-3eb447bf976f", 00:18:29.539 "strip_size_kb": 64, 00:18:29.539 "state": "online", 00:18:29.539 "raid_level": "concat", 00:18:29.539 "superblock": true, 00:18:29.539 "num_base_bdevs": 4, 00:18:29.539 "num_base_bdevs_discovered": 4, 00:18:29.539 "num_base_bdevs_operational": 4, 00:18:29.539 "base_bdevs_list": [ 00:18:29.539 { 00:18:29.539 "name": "BaseBdev1", 00:18:29.539 "uuid": "bb6120de-3266-433c-92e2-956cb9b3a242", 00:18:29.539 "is_configured": true, 00:18:29.539 "data_offset": 2048, 00:18:29.539 "data_size": 63488 00:18:29.539 }, 00:18:29.539 { 00:18:29.539 "name": "BaseBdev2", 00:18:29.539 "uuid": "26d81682-a57c-4f5d-b0b8-cfaae8a6e246", 00:18:29.539 "is_configured": true, 00:18:29.539 "data_offset": 2048, 00:18:29.539 "data_size": 63488 00:18:29.539 }, 00:18:29.539 { 00:18:29.539 "name": "BaseBdev3", 00:18:29.539 "uuid": "57f7e5dc-af95-42b6-afdf-a56c9727f513", 00:18:29.539 "is_configured": true, 00:18:29.539 "data_offset": 2048, 00:18:29.539 "data_size": 63488 00:18:29.539 }, 00:18:29.539 { 00:18:29.539 "name": "BaseBdev4", 00:18:29.539 "uuid": "5fd6d3cc-f872-4eec-bcca-12a9338f293d", 00:18:29.539 "is_configured": true, 00:18:29.539 "data_offset": 2048, 00:18:29.539 "data_size": 63488 00:18:29.539 } 00:18:29.539 ] 00:18:29.539 }' 00:18:29.539 07:19:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:29.539 07:19:03 -- common/autotest_common.sh@10 -- # set +x 00:18:30.145 07:19:03 -- bdev/bdev_raid.sh@262 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:30.404 [2024-02-13 07:19:03.978507] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:30.404 [2024-02-13 07:19:03.978702] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:30.404 [2024-02-13 07:19:03.978908] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:30.404 07:19:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.663 07:19:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:30.663 "name": "Existed_Raid", 00:18:30.663 "uuid": "8a6a3a6c-0c28-4099-8ff2-3eb447bf976f", 00:18:30.663 "strip_size_kb": 64, 00:18:30.663 "state": "offline", 00:18:30.663 "raid_level": "concat", 00:18:30.663 "superblock": true, 00:18:30.663 "num_base_bdevs": 4, 00:18:30.663 "num_base_bdevs_discovered": 3, 00:18:30.663 "num_base_bdevs_operational": 3, 00:18:30.663 "base_bdevs_list": [ 00:18:30.663 { 00:18:30.663 "name": null, 00:18:30.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.663 "is_configured": false, 00:18:30.663 "data_offset": 2048, 00:18:30.663 "data_size": 63488 00:18:30.663 }, 00:18:30.663 { 00:18:30.663 "name": "BaseBdev2", 00:18:30.663 "uuid": "26d81682-a57c-4f5d-b0b8-cfaae8a6e246", 00:18:30.663 "is_configured": true, 00:18:30.663 "data_offset": 2048, 00:18:30.663 "data_size": 63488 00:18:30.663 }, 00:18:30.663 { 00:18:30.663 "name": "BaseBdev3", 00:18:30.663 "uuid": "57f7e5dc-af95-42b6-afdf-a56c9727f513", 00:18:30.663 "is_configured": true, 00:18:30.663 "data_offset": 2048, 00:18:30.663 "data_size": 63488 00:18:30.663 }, 00:18:30.663 { 00:18:30.663 "name": "BaseBdev4", 00:18:30.663 "uuid": "5fd6d3cc-f872-4eec-bcca-12a9338f293d", 00:18:30.663 "is_configured": true, 00:18:30.663 "data_offset": 2048, 00:18:30.663 "data_size": 63488 00:18:30.663 } 00:18:30.663 ] 00:18:30.663 }' 00:18:30.663 07:19:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:30.663 07:19:04 -- common/autotest_common.sh@10 -- # set +x 00:18:31.599 07:19:04 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:31.599 07:19:04 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:31.599 07:19:04 -- 
bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.599 07:19:04 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:31.599 07:19:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:31.599 07:19:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:31.599 07:19:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:31.857 [2024-02-13 07:19:05.433762] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:31.857 07:19:05 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:31.857 07:19:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:31.857 07:19:05 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.857 07:19:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:32.116 07:19:05 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:32.116 07:19:05 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.116 07:19:05 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:32.374 [2024-02-13 07:19:05.965931] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:32.374 07:19:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:32.374 07:19:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.374 07:19:06 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.374 07:19:06 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:32.636 07:19:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:32.636 07:19:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:32.636 07:19:06 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:32.894 [2024-02-13 07:19:06.425455] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:32.894 [2024-02-13 07:19:06.425677] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:18:32.894 07:19:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:32.894 07:19:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:32.894 07:19:06 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.894 07:19:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:33.185 07:19:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:33.185 07:19:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:33.185 07:19:06 -- bdev/bdev_raid.sh@287 -- # killprocess 124753 00:18:33.185 07:19:06 -- common/autotest_common.sh@924 -- # '[' -z 124753 ']' 00:18:33.185 07:19:06 -- common/autotest_common.sh@928 -- # kill -0 124753 00:18:33.185 07:19:06 -- common/autotest_common.sh@929 -- # uname 00:18:33.185 07:19:06 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:33.185 07:19:06 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 124753 00:18:33.185 killing process with pid 124753 00:18:33.185 07:19:06 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:33.185 07:19:06 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:33.185 07:19:06 -- 
common/autotest_common.sh@942 -- # echo 'killing process with pid 124753' 00:18:33.185 07:19:06 -- common/autotest_common.sh@943 -- # kill 124753 00:18:33.185 07:19:06 -- common/autotest_common.sh@948 -- # wait 124753 00:18:33.185 [2024-02-13 07:19:06.729561] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.185 [2024-02-13 07:19:06.729730] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.120 ************************************ 00:18:34.120 END TEST raid_state_function_test_sb 00:18:34.120 ************************************ 00:18:34.120 07:19:07 -- bdev/bdev_raid.sh@289 -- # return 0 00:18:34.120 00:18:34.120 real 0m15.186s 00:18:34.120 user 0m27.305s 00:18:34.120 sys 0m1.601s 00:18:34.120 07:19:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:34.120 07:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:18:34.379 07:19:07 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:18:34.379 07:19:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:34.379 07:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:34.379 ************************************ 00:18:34.379 START TEST raid_superblock_test 00:18:34.379 ************************************ 00:18:34.379 07:19:07 -- common/autotest_common.sh@1102 -- # raid_superblock_test concat 4 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@357 -- # raid_pid=125242 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:18:34.379 07:19:07 -- bdev/bdev_raid.sh@358 -- # waitforlisten 125242 /var/tmp/spdk-raid.sock 00:18:34.379 07:19:07 -- common/autotest_common.sh@817 -- # '[' -z 125242 ']' 00:18:34.379 07:19:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:34.379 07:19:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:34.379 07:19:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:34.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
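For readers following along, the setup that raid_superblock_test performs between here and its first state check can be reproduced by hand against the freshly started target. A minimal sketch, assuming the SPDK repo root as the working directory; the socket path, sizes, and bdev names are exactly those traced below:

    # start the target the same way the test harness does, with bdev_raid debug logging on
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &

    # one passthru-on-malloc leg; the test repeats this for malloc2/pt2 .. malloc4/pt4
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001

    # assemble all four legs into a concat raid with a 64 KiB strip and an on-disk superblock (-s)
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
        -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

That -s flag is what later makes the second bdev_raid_create attempt fail with "File exists": after the passthru bdevs are deleted, the underlying malloc bdevs still carry the raid superblock, and offering them directly to bdev_raid_create is rejected with JSON-RPC error -17, as traced further down.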
00:18:34.379 07:19:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:34.379 07:19:07 -- common/autotest_common.sh@10 -- # set +x 00:18:34.379 [2024-02-13 07:19:07.892852] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:18:34.379 [2024-02-13 07:19:07.893386] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125242 ] 00:18:34.379 [2024-02-13 07:19:08.049551] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.638 [2024-02-13 07:19:08.232604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.898 [2024-02-13 07:19:08.419808] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.465 07:19:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:35.465 07:19:08 -- common/autotest_common.sh@850 -- # return 0 00:18:35.465 07:19:08 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:18:35.465 07:19:08 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:35.465 07:19:08 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:18:35.465 07:19:08 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:18:35.465 07:19:08 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:35.465 07:19:08 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.465 07:19:08 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.465 07:19:08 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.465 07:19:08 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:18:35.465 malloc1 00:18:35.465 07:19:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:35.724 [2024-02-13 07:19:09.327542] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.724 [2024-02-13 07:19:09.327819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.724 [2024-02-13 07:19:09.327978] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:18:35.724 [2024-02-13 07:19:09.328132] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.724 [2024-02-13 07:19:09.330830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.724 [2024-02-13 07:19:09.331058] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.724 pt1 00:18:35.724 07:19:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:35.724 07:19:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:35.724 07:19:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:18:35.724 07:19:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:18:35.724 07:19:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:35.724 07:19:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.724 07:19:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.724 07:19:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.724 07:19:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:18:35.982 malloc2 00:18:35.982 07:19:09 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:36.240 [2024-02-13 07:19:09.833668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:36.240 [2024-02-13 07:19:09.833940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.240 [2024-02-13 07:19:09.834034] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:18:36.240 [2024-02-13 07:19:09.834369] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.240 [2024-02-13 07:19:09.836521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.240 [2024-02-13 07:19:09.836721] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:36.240 pt2 00:18:36.240 07:19:09 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.240 07:19:09 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.240 07:19:09 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:18:36.240 07:19:09 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:18:36.240 07:19:09 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:36.240 07:19:09 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.240 07:19:09 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.240 07:19:09 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.240 07:19:09 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:18:36.498 malloc3 00:18:36.499 07:19:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:36.757 [2024-02-13 07:19:10.266250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:36.757 [2024-02-13 07:19:10.266503] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.757 [2024-02-13 07:19:10.266592] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:18:36.757 [2024-02-13 07:19:10.266910] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.757 [2024-02-13 07:19:10.269474] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.757 [2024-02-13 07:19:10.269668] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:36.757 pt3 00:18:36.757 07:19:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:36.757 07:19:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:36.757 07:19:10 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:18:36.757 07:19:10 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:18:36.757 07:19:10 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:36.757 07:19:10 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.757 07:19:10 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.757 07:19:10 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.757 07:19:10 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:18:37.016 malloc4 00:18:37.016 07:19:10 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:37.016 [2024-02-13 07:19:10.705198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:37.016 [2024-02-13 07:19:10.705464] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.016 [2024-02-13 07:19:10.705584] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:37.016 [2024-02-13 07:19:10.705898] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.016 [2024-02-13 07:19:10.708750] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.016 [2024-02-13 07:19:10.708938] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:37.276 pt4 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:18:37.276 [2024-02-13 07:19:10.905451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.276 [2024-02-13 07:19:10.907363] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.276 [2024-02-13 07:19:10.907585] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:37.276 [2024-02-13 07:19:10.907714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:37.276 [2024-02-13 07:19:10.908051] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:18:37.276 [2024-02-13 07:19:10.908196] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:37.276 [2024-02-13 07:19:10.908385] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:18:37.276 [2024-02-13 07:19:10.908879] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:18:37.276 [2024-02-13 07:19:10.909023] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:18:37.276 [2024-02-13 07:19:10.909364] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:18:37.276 07:19:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.535 07:19:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:37.535 "name": "raid_bdev1", 00:18:37.535 "uuid": "2c4feb19-a00c-43c8-abbd-e290600e81e5", 00:18:37.535 "strip_size_kb": 64, 00:18:37.535 "state": "online", 00:18:37.535 "raid_level": "concat", 00:18:37.535 "superblock": true, 00:18:37.535 "num_base_bdevs": 4, 00:18:37.535 "num_base_bdevs_discovered": 4, 00:18:37.535 "num_base_bdevs_operational": 4, 00:18:37.535 "base_bdevs_list": [ 00:18:37.535 { 00:18:37.535 "name": "pt1", 00:18:37.535 "uuid": "1fb4370c-204c-5e6c-a5b2-0192c873ff5d", 00:18:37.535 "is_configured": true, 00:18:37.535 "data_offset": 2048, 00:18:37.535 "data_size": 63488 00:18:37.535 }, 00:18:37.535 { 00:18:37.535 "name": "pt2", 00:18:37.535 "uuid": "f13e5550-523c-5b51-ac4a-bd1a2422a625", 00:18:37.535 "is_configured": true, 00:18:37.535 "data_offset": 2048, 00:18:37.535 "data_size": 63488 00:18:37.535 }, 00:18:37.535 { 00:18:37.535 "name": "pt3", 00:18:37.535 "uuid": "d20bf58d-7e76-59ef-9826-d80493a535c0", 00:18:37.535 "is_configured": true, 00:18:37.535 "data_offset": 2048, 00:18:37.535 "data_size": 63488 00:18:37.535 }, 00:18:37.535 { 00:18:37.535 "name": "pt4", 00:18:37.535 "uuid": "383c1092-9d60-5f10-8dbe-a35bc1880cde", 00:18:37.535 "is_configured": true, 00:18:37.535 "data_offset": 2048, 00:18:37.535 "data_size": 63488 00:18:37.535 } 00:18:37.535 ] 00:18:37.535 }' 00:18:37.535 07:19:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:37.535 07:19:11 -- common/autotest_common.sh@10 -- # set +x 00:18:38.472 07:19:11 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:38.472 07:19:11 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:18:38.472 [2024-02-13 07:19:12.009993] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:38.472 07:19:12 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=2c4feb19-a00c-43c8-abbd-e290600e81e5 00:18:38.472 07:19:12 -- bdev/bdev_raid.sh@380 -- # '[' -z 2c4feb19-a00c-43c8-abbd-e290600e81e5 ']' 00:18:38.472 07:19:12 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:38.736 [2024-02-13 07:19:12.201753] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:38.736 [2024-02-13 07:19:12.201930] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:38.736 [2024-02-13 07:19:12.202109] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:38.736 [2024-02-13 07:19:12.202303] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:38.736 [2024-02-13 07:19:12.202420] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:18:38.736 07:19:12 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:38.736 07:19:12 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:18:38.736 07:19:12 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:18:38.736 07:19:12 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:18:38.736 07:19:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:38.736 07:19:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 
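Each verify_raid_bdev_state call in this log reduces to one RPC piped through one jq filter; an equivalent manual check, with field names as in the JSON dumps above, looks like this:

    # fetch every raid bdev and keep only the one under test, as bdev_raid.sh@127 does
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1")'

    # individual assertions then compare single fields, e.g. the state:
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .state'    # expect "online"

The same pattern, with select(.name == "Existed_Raid"), drives the state checks in the surrounding state-function tests.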
00:18:39.002 07:19:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.002 07:19:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:39.261 07:19:12 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.262 07:19:12 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:18:39.520 07:19:13 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:18:39.520 07:19:13 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:18:39.779 07:19:13 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:39.779 07:19:13 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:40.037 07:19:13 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:18:40.037 07:19:13 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:40.037 07:19:13 -- common/autotest_common.sh@638 -- # local es=0 00:18:40.037 07:19:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:40.037 07:19:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.037 07:19:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:40.037 07:19:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.037 07:19:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:40.037 07:19:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.037 07:19:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:40.037 07:19:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:40.037 07:19:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:40.037 07:19:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:18:40.037 [2024-02-13 07:19:13.718120] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:40.037 [2024-02-13 07:19:13.720044] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:40.037 [2024-02-13 07:19:13.720226] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:40.037 [2024-02-13 07:19:13.720319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:40.037 [2024-02-13 07:19:13.720555] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:18:40.037 [2024-02-13 07:19:13.720800] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:18:40.038 [2024-02-13 07:19:13.720989] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:18:40.038 
[2024-02-13 07:19:13.721160] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:18:40.038 [2024-02-13 07:19:13.721234] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.038 [2024-02-13 07:19:13.721345] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:18:40.038 request: 00:18:40.038 { 00:18:40.038 "name": "raid_bdev1", 00:18:40.038 "raid_level": "concat", 00:18:40.038 "base_bdevs": [ 00:18:40.038 "malloc1", 00:18:40.038 "malloc2", 00:18:40.038 "malloc3", 00:18:40.038 "malloc4" 00:18:40.038 ], 00:18:40.038 "superblock": false, 00:18:40.038 "strip_size_kb": 64, 00:18:40.038 "method": "bdev_raid_create", 00:18:40.038 "req_id": 1 00:18:40.038 } 00:18:40.038 Got JSON-RPC error response 00:18:40.038 response: 00:18:40.038 { 00:18:40.038 "code": -17, 00:18:40.038 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:40.038 } 00:18:40.296 07:19:13 -- common/autotest_common.sh@641 -- # es=1 00:18:40.297 07:19:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:40.297 07:19:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:40.297 07:19:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:40.297 07:19:13 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.297 07:19:13 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:18:40.297 07:19:13 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:18:40.297 07:19:13 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:18:40.297 07:19:13 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:40.555 [2024-02-13 07:19:14.154314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:40.555 [2024-02-13 07:19:14.154803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.555 [2024-02-13 07:19:14.155017] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:40.555 [2024-02-13 07:19:14.155161] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.555 [2024-02-13 07:19:14.157774] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.555 [2024-02-13 07:19:14.157995] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:40.556 [2024-02-13 07:19:14.158293] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:18:40.556 [2024-02-13 07:19:14.158517] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:40.556 pt1 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:40.556 07:19:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.815 07:19:14 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:40.815 "name": "raid_bdev1", 00:18:40.815 "uuid": "2c4feb19-a00c-43c8-abbd-e290600e81e5", 00:18:40.815 "strip_size_kb": 64, 00:18:40.815 "state": "configuring", 00:18:40.815 "raid_level": "concat", 00:18:40.815 "superblock": true, 00:18:40.815 "num_base_bdevs": 4, 00:18:40.815 "num_base_bdevs_discovered": 1, 00:18:40.815 "num_base_bdevs_operational": 4, 00:18:40.815 "base_bdevs_list": [ 00:18:40.815 { 00:18:40.815 "name": "pt1", 00:18:40.815 "uuid": "1fb4370c-204c-5e6c-a5b2-0192c873ff5d", 00:18:40.815 "is_configured": true, 00:18:40.815 "data_offset": 2048, 00:18:40.815 "data_size": 63488 00:18:40.815 }, 00:18:40.815 { 00:18:40.815 "name": null, 00:18:40.815 "uuid": "f13e5550-523c-5b51-ac4a-bd1a2422a625", 00:18:40.815 "is_configured": false, 00:18:40.815 "data_offset": 2048, 00:18:40.815 "data_size": 63488 00:18:40.815 }, 00:18:40.815 { 00:18:40.815 "name": null, 00:18:40.815 "uuid": "d20bf58d-7e76-59ef-9826-d80493a535c0", 00:18:40.815 "is_configured": false, 00:18:40.815 "data_offset": 2048, 00:18:40.815 "data_size": 63488 00:18:40.815 }, 00:18:40.815 { 00:18:40.815 "name": null, 00:18:40.815 "uuid": "383c1092-9d60-5f10-8dbe-a35bc1880cde", 00:18:40.815 "is_configured": false, 00:18:40.815 "data_offset": 2048, 00:18:40.815 "data_size": 63488 00:18:40.815 } 00:18:40.815 ] 00:18:40.815 }' 00:18:40.815 07:19:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:40.815 07:19:14 -- common/autotest_common.sh@10 -- # set +x 00:18:41.753 07:19:15 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:18:41.753 07:19:15 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:41.753 [2024-02-13 07:19:15.278603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:41.753 [2024-02-13 07:19:15.278878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.753 [2024-02-13 07:19:15.278981] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:41.753 [2024-02-13 07:19:15.279219] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.753 [2024-02-13 07:19:15.279964] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.753 [2024-02-13 07:19:15.280130] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:41.753 [2024-02-13 07:19:15.280354] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:18:41.753 [2024-02-13 07:19:15.280486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.753 pt2 00:18:41.753 07:19:15 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:42.012 [2024-02-13 07:19:15.486698] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 
00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:42.012 "name": "raid_bdev1", 00:18:42.012 "uuid": "2c4feb19-a00c-43c8-abbd-e290600e81e5", 00:18:42.012 "strip_size_kb": 64, 00:18:42.012 "state": "configuring", 00:18:42.012 "raid_level": "concat", 00:18:42.012 "superblock": true, 00:18:42.012 "num_base_bdevs": 4, 00:18:42.012 "num_base_bdevs_discovered": 1, 00:18:42.012 "num_base_bdevs_operational": 4, 00:18:42.012 "base_bdevs_list": [ 00:18:42.012 { 00:18:42.012 "name": "pt1", 00:18:42.012 "uuid": "1fb4370c-204c-5e6c-a5b2-0192c873ff5d", 00:18:42.012 "is_configured": true, 00:18:42.012 "data_offset": 2048, 00:18:42.012 "data_size": 63488 00:18:42.012 }, 00:18:42.012 { 00:18:42.012 "name": null, 00:18:42.012 "uuid": "f13e5550-523c-5b51-ac4a-bd1a2422a625", 00:18:42.012 "is_configured": false, 00:18:42.012 "data_offset": 2048, 00:18:42.012 "data_size": 63488 00:18:42.012 }, 00:18:42.012 { 00:18:42.012 "name": null, 00:18:42.012 "uuid": "d20bf58d-7e76-59ef-9826-d80493a535c0", 00:18:42.012 "is_configured": false, 00:18:42.012 "data_offset": 2048, 00:18:42.012 "data_size": 63488 00:18:42.012 }, 00:18:42.012 { 00:18:42.012 "name": null, 00:18:42.012 "uuid": "383c1092-9d60-5f10-8dbe-a35bc1880cde", 00:18:42.012 "is_configured": false, 00:18:42.012 "data_offset": 2048, 00:18:42.012 "data_size": 63488 00:18:42.012 } 00:18:42.012 ] 00:18:42.012 }' 00:18:42.012 07:19:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:42.012 07:19:15 -- common/autotest_common.sh@10 -- # set +x 00:18:42.949 07:19:16 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:18:42.949 07:19:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.949 07:19:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:42.949 [2024-02-13 07:19:16.582832] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:42.949 [2024-02-13 07:19:16.583073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.949 [2024-02-13 07:19:16.583156] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:42.949 [2024-02-13 07:19:16.583330] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.949 [2024-02-13 07:19:16.583933] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.949 [2024-02-13 07:19:16.584114] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:42.949 [2024-02-13 07:19:16.584342] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock 
found on bdev pt2 00:18:42.949 [2024-02-13 07:19:16.584472] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:42.949 pt2 00:18:42.949 07:19:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:42.949 07:19:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:42.949 07:19:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:43.209 [2024-02-13 07:19:16.838879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:43.209 [2024-02-13 07:19:16.839135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.209 [2024-02-13 07:19:16.839220] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:43.209 [2024-02-13 07:19:16.839534] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.209 [2024-02-13 07:19:16.840057] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.209 [2024-02-13 07:19:16.840251] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:43.209 [2024-02-13 07:19:16.840476] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:18:43.209 [2024-02-13 07:19:16.840639] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:43.209 pt3 00:18:43.209 07:19:16 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:43.209 07:19:16 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:43.209 07:19:16 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:43.468 [2024-02-13 07:19:17.102934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:43.468 [2024-02-13 07:19:17.103158] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.468 [2024-02-13 07:19:17.103242] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:43.468 [2024-02-13 07:19:17.103364] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.468 [2024-02-13 07:19:17.103803] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.468 [2024-02-13 07:19:17.103983] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:43.468 [2024-02-13 07:19:17.104175] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:18:43.468 [2024-02-13 07:19:17.104297] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:43.468 [2024-02-13 07:19:17.104569] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:18:43.468 [2024-02-13 07:19:17.104682] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:18:43.468 [2024-02-13 07:19:17.104906] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:43.468 [2024-02-13 07:19:17.105389] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:18:43.468 [2024-02-13 07:19:17.105545] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:18:43.468 [2024-02-13 07:19:17.105771] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:18:43.468 pt4 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:43.468 07:19:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.727 07:19:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:43.727 "name": "raid_bdev1", 00:18:43.727 "uuid": "2c4feb19-a00c-43c8-abbd-e290600e81e5", 00:18:43.727 "strip_size_kb": 64, 00:18:43.727 "state": "online", 00:18:43.727 "raid_level": "concat", 00:18:43.727 "superblock": true, 00:18:43.727 "num_base_bdevs": 4, 00:18:43.727 "num_base_bdevs_discovered": 4, 00:18:43.727 "num_base_bdevs_operational": 4, 00:18:43.727 "base_bdevs_list": [ 00:18:43.727 { 00:18:43.727 "name": "pt1", 00:18:43.727 "uuid": "1fb4370c-204c-5e6c-a5b2-0192c873ff5d", 00:18:43.727 "is_configured": true, 00:18:43.727 "data_offset": 2048, 00:18:43.727 "data_size": 63488 00:18:43.727 }, 00:18:43.727 { 00:18:43.727 "name": "pt2", 00:18:43.727 "uuid": "f13e5550-523c-5b51-ac4a-bd1a2422a625", 00:18:43.727 "is_configured": true, 00:18:43.727 "data_offset": 2048, 00:18:43.727 "data_size": 63488 00:18:43.727 }, 00:18:43.727 { 00:18:43.727 "name": "pt3", 00:18:43.727 "uuid": "d20bf58d-7e76-59ef-9826-d80493a535c0", 00:18:43.727 "is_configured": true, 00:18:43.727 "data_offset": 2048, 00:18:43.727 "data_size": 63488 00:18:43.727 }, 00:18:43.727 { 00:18:43.727 "name": "pt4", 00:18:43.727 "uuid": "383c1092-9d60-5f10-8dbe-a35bc1880cde", 00:18:43.727 "is_configured": true, 00:18:43.727 "data_offset": 2048, 00:18:43.727 "data_size": 63488 00:18:43.727 } 00:18:43.727 ] 00:18:43.727 }' 00:18:43.727 07:19:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:43.727 07:19:17 -- common/autotest_common.sh@10 -- # set +x 00:18:44.664 07:19:18 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:44.664 07:19:18 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:18:44.664 [2024-02-13 07:19:18.215448] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:44.664 07:19:18 -- bdev/bdev_raid.sh@430 -- # '[' 2c4feb19-a00c-43c8-abbd-e290600e81e5 '!=' 2c4feb19-a00c-43c8-abbd-e290600e81e5 ']' 00:18:44.664 07:19:18 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:18:44.664 07:19:18 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:44.664 07:19:18 -- bdev/bdev_raid.sh@197 -- # return 1 00:18:44.664 07:19:18 -- bdev/bdev_raid.sh@511 -- # killprocess 125242 00:18:44.664 07:19:18 -- common/autotest_common.sh@924 -- # '[' 
-z 125242 ']' 00:18:44.664 07:19:18 -- common/autotest_common.sh@928 -- # kill -0 125242 00:18:44.664 07:19:18 -- common/autotest_common.sh@929 -- # uname 00:18:44.664 07:19:18 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:44.664 07:19:18 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 125242 00:18:44.664 killing process with pid 125242 00:18:44.664 07:19:18 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:44.664 07:19:18 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:44.664 07:19:18 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 125242' 00:18:44.664 07:19:18 -- common/autotest_common.sh@943 -- # kill 125242 00:18:44.664 07:19:18 -- common/autotest_common.sh@948 -- # wait 125242 00:18:44.664 [2024-02-13 07:19:18.254365] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:44.664 [2024-02-13 07:19:18.254444] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:44.664 [2024-02-13 07:19:18.254554] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:44.664 [2024-02-13 07:19:18.254613] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:18:44.923 [2024-02-13 07:19:18.525259] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:45.866 ************************************ 00:18:45.866 END TEST raid_superblock_test 00:18:45.866 ************************************ 00:18:45.866 07:19:19 -- bdev/bdev_raid.sh@513 -- # return 0 00:18:45.866 00:18:45.866 real 0m11.689s 00:18:45.866 user 0m20.443s 00:18:45.866 sys 0m1.398s 00:18:45.866 07:19:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:45.866 07:19:19 -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:18:46.126 07:19:19 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:18:46.126 07:19:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:18:46.126 07:19:19 -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 ************************************ 00:18:46.126 START TEST raid_state_function_test 00:18:46.126 ************************************ 00:18:46.126 07:19:19 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 4 false 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:46.126 07:19:19 -- 
bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@226 -- # raid_pid=125580 00:18:46.126 Process raid pid: 125580 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 125580' 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@228 -- # waitforlisten 125580 /var/tmp/spdk-raid.sock 00:18:46.126 07:19:19 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:18:46.126 07:19:19 -- common/autotest_common.sh@817 -- # '[' -z 125580 ']' 00:18:46.126 07:19:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:46.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:46.126 07:19:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:46.126 07:19:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:46.126 07:19:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:46.126 07:19:19 -- common/autotest_common.sh@10 -- # set +x 00:18:46.126 [2024-02-13 07:19:19.658687] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
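Note the order of operations in raid_state_function_test, visible in the "doesn't exist now" DEBUG lines below: the raid is created first, over base bdevs that have not been registered yet, and only afterwards is BaseBdev1 backed by a real malloc bdev. A condensed sketch of those first two RPCs (raid1 takes no strip size, hence no -z, and the superblock flag stays off because the test passes false):

    # create a 4-leg raid1 over not-yet-existing bdevs; Existed_Raid stays "configuring"
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

    # registering the first leg flips num_base_bdevs_discovered from 0 to 1
    ./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1

Both commands, and the resulting configuring-state JSON with discovered counts of 0 and then 1, appear verbatim in the trace that follows.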
00:18:46.126 [2024-02-13 07:19:19.658901] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.385 [2024-02-13 07:19:19.822792] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.385 [2024-02-13 07:19:20.011632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.644 [2024-02-13 07:19:20.201835] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:47.212 07:19:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:47.212 07:19:20 -- common/autotest_common.sh@850 -- # return 0 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:47.212 [2024-02-13 07:19:20.834640] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:47.212 [2024-02-13 07:19:20.834739] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:47.212 [2024-02-13 07:19:20.834756] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:47.212 [2024-02-13 07:19:20.834779] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:47.212 [2024-02-13 07:19:20.834804] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:47.212 [2024-02-13 07:19:20.834846] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:47.212 [2024-02-13 07:19:20.834857] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:47.212 [2024-02-13 07:19:20.834881] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:47.212 07:19:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:47.471 07:19:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:47.471 "name": "Existed_Raid", 00:18:47.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.471 "strip_size_kb": 0, 00:18:47.471 "state": "configuring", 00:18:47.471 "raid_level": "raid1", 00:18:47.471 "superblock": false, 00:18:47.471 "num_base_bdevs": 4, 00:18:47.471 "num_base_bdevs_discovered": 0, 00:18:47.471 "num_base_bdevs_operational": 4, 00:18:47.471 "base_bdevs_list": [ 00:18:47.471 { 00:18:47.471 "name": 
"BaseBdev1", 00:18:47.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.471 "is_configured": false, 00:18:47.471 "data_offset": 0, 00:18:47.471 "data_size": 0 00:18:47.471 }, 00:18:47.471 { 00:18:47.471 "name": "BaseBdev2", 00:18:47.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.471 "is_configured": false, 00:18:47.471 "data_offset": 0, 00:18:47.471 "data_size": 0 00:18:47.471 }, 00:18:47.471 { 00:18:47.471 "name": "BaseBdev3", 00:18:47.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.471 "is_configured": false, 00:18:47.471 "data_offset": 0, 00:18:47.471 "data_size": 0 00:18:47.471 }, 00:18:47.471 { 00:18:47.471 "name": "BaseBdev4", 00:18:47.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.471 "is_configured": false, 00:18:47.471 "data_offset": 0, 00:18:47.471 "data_size": 0 00:18:47.471 } 00:18:47.471 ] 00:18:47.471 }' 00:18:47.471 07:19:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:47.471 07:19:21 -- common/autotest_common.sh@10 -- # set +x 00:18:48.038 07:19:21 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:48.307 [2024-02-13 07:19:21.942733] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:48.307 [2024-02-13 07:19:21.942774] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:18:48.307 07:19:21 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:48.565 [2024-02-13 07:19:22.194785] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:48.565 [2024-02-13 07:19:22.194865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:48.565 [2024-02-13 07:19:22.194904] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:48.565 [2024-02-13 07:19:22.194930] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:48.565 [2024-02-13 07:19:22.194968] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:48.565 [2024-02-13 07:19:22.195005] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:48.565 [2024-02-13 07:19:22.195015] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:48.565 [2024-02-13 07:19:22.195038] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:48.565 07:19:22 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:18:48.825 [2024-02-13 07:19:22.421135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:48.825 BaseBdev1 00:18:48.825 07:19:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:18:48.825 07:19:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:18:48.825 07:19:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:48.825 07:19:22 -- common/autotest_common.sh@887 -- # local i 00:18:48.825 07:19:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:48.825 07:19:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:48.825 07:19:22 -- common/autotest_common.sh@890 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:49.083 07:19:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:49.343 [ 00:18:49.343 { 00:18:49.343 "name": "BaseBdev1", 00:18:49.343 "aliases": [ 00:18:49.343 "9ab58494-3985-4724-8d13-8df727df6b24" 00:18:49.343 ], 00:18:49.343 "product_name": "Malloc disk", 00:18:49.343 "block_size": 512, 00:18:49.343 "num_blocks": 65536, 00:18:49.343 "uuid": "9ab58494-3985-4724-8d13-8df727df6b24", 00:18:49.343 "assigned_rate_limits": { 00:18:49.343 "rw_ios_per_sec": 0, 00:18:49.343 "rw_mbytes_per_sec": 0, 00:18:49.343 "r_mbytes_per_sec": 0, 00:18:49.343 "w_mbytes_per_sec": 0 00:18:49.343 }, 00:18:49.343 "claimed": true, 00:18:49.343 "claim_type": "exclusive_write", 00:18:49.343 "zoned": false, 00:18:49.343 "supported_io_types": { 00:18:49.343 "read": true, 00:18:49.343 "write": true, 00:18:49.343 "unmap": true, 00:18:49.343 "write_zeroes": true, 00:18:49.343 "flush": true, 00:18:49.343 "reset": true, 00:18:49.343 "compare": false, 00:18:49.343 "compare_and_write": false, 00:18:49.343 "abort": true, 00:18:49.343 "nvme_admin": false, 00:18:49.343 "nvme_io": false 00:18:49.343 }, 00:18:49.343 "memory_domains": [ 00:18:49.343 { 00:18:49.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:49.343 "dma_device_type": 2 00:18:49.343 } 00:18:49.343 ], 00:18:49.343 "driver_specific": {} 00:18:49.343 } 00:18:49.343 ] 00:18:49.343 07:19:22 -- common/autotest_common.sh@893 -- # return 0 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:49.343 07:19:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:49.602 07:19:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:49.602 "name": "Existed_Raid", 00:18:49.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.602 "strip_size_kb": 0, 00:18:49.602 "state": "configuring", 00:18:49.602 "raid_level": "raid1", 00:18:49.602 "superblock": false, 00:18:49.602 "num_base_bdevs": 4, 00:18:49.602 "num_base_bdevs_discovered": 1, 00:18:49.602 "num_base_bdevs_operational": 4, 00:18:49.602 "base_bdevs_list": [ 00:18:49.602 { 00:18:49.602 "name": "BaseBdev1", 00:18:49.602 "uuid": "9ab58494-3985-4724-8d13-8df727df6b24", 00:18:49.602 "is_configured": true, 00:18:49.602 "data_offset": 0, 00:18:49.602 "data_size": 65536 00:18:49.602 }, 00:18:49.602 { 00:18:49.602 "name": "BaseBdev2", 00:18:49.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.602 "is_configured": false, 00:18:49.602 "data_offset": 0, 00:18:49.602 "data_size": 0 00:18:49.602 }, 
00:18:49.602 { 00:18:49.602 "name": "BaseBdev3", 00:18:49.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.602 "is_configured": false, 00:18:49.602 "data_offset": 0, 00:18:49.602 "data_size": 0 00:18:49.602 }, 00:18:49.602 { 00:18:49.602 "name": "BaseBdev4", 00:18:49.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.602 "is_configured": false, 00:18:49.602 "data_offset": 0, 00:18:49.602 "data_size": 0 00:18:49.602 } 00:18:49.602 ] 00:18:49.602 }' 00:18:49.602 07:19:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:49.602 07:19:23 -- common/autotest_common.sh@10 -- # set +x 00:18:50.169 07:19:23 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:18:50.427 [2024-02-13 07:19:23.997504] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:50.427 [2024-02-13 07:19:23.997567] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:18:50.427 07:19:24 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:18:50.427 07:19:24 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:18:50.686 [2024-02-13 07:19:24.245670] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:50.686 [2024-02-13 07:19:24.247693] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:50.686 [2024-02-13 07:19:24.247784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:50.686 [2024-02-13 07:19:24.247799] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:50.686 [2024-02-13 07:19:24.247829] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:50.686 [2024-02-13 07:19:24.247839] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:50.686 [2024-02-13 07:19:24.247859] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:50.686 07:19:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:50.945 07:19:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:50.945 "name": "Existed_Raid", 00:18:50.945 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:50.945 "strip_size_kb": 0, 00:18:50.945 "state": "configuring", 00:18:50.945 "raid_level": "raid1", 00:18:50.945 "superblock": false, 00:18:50.945 "num_base_bdevs": 4, 00:18:50.945 "num_base_bdevs_discovered": 1, 00:18:50.945 "num_base_bdevs_operational": 4, 00:18:50.945 "base_bdevs_list": [ 00:18:50.945 { 00:18:50.945 "name": "BaseBdev1", 00:18:50.945 "uuid": "9ab58494-3985-4724-8d13-8df727df6b24", 00:18:50.945 "is_configured": true, 00:18:50.945 "data_offset": 0, 00:18:50.945 "data_size": 65536 00:18:50.945 }, 00:18:50.945 { 00:18:50.945 "name": "BaseBdev2", 00:18:50.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.945 "is_configured": false, 00:18:50.945 "data_offset": 0, 00:18:50.945 "data_size": 0 00:18:50.945 }, 00:18:50.945 { 00:18:50.945 "name": "BaseBdev3", 00:18:50.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.945 "is_configured": false, 00:18:50.945 "data_offset": 0, 00:18:50.945 "data_size": 0 00:18:50.945 }, 00:18:50.945 { 00:18:50.945 "name": "BaseBdev4", 00:18:50.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.945 "is_configured": false, 00:18:50.945 "data_offset": 0, 00:18:50.945 "data_size": 0 00:18:50.945 } 00:18:50.945 ] 00:18:50.945 }' 00:18:50.945 07:19:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:50.945 07:19:24 -- common/autotest_common.sh@10 -- # set +x 00:18:51.881 07:19:25 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:18:51.881 [2024-02-13 07:19:25.455287] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:51.881 BaseBdev2 00:18:51.881 07:19:25 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:18:51.881 07:19:25 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:18:51.881 07:19:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:51.881 07:19:25 -- common/autotest_common.sh@887 -- # local i 00:18:51.881 07:19:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:51.881 07:19:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:51.881 07:19:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:52.138 07:19:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:52.396 [ 00:18:52.396 { 00:18:52.396 "name": "BaseBdev2", 00:18:52.396 "aliases": [ 00:18:52.396 "73afee83-af77-4b83-a932-9f386ab1049f" 00:18:52.396 ], 00:18:52.396 "product_name": "Malloc disk", 00:18:52.396 "block_size": 512, 00:18:52.396 "num_blocks": 65536, 00:18:52.396 "uuid": "73afee83-af77-4b83-a932-9f386ab1049f", 00:18:52.396 "assigned_rate_limits": { 00:18:52.396 "rw_ios_per_sec": 0, 00:18:52.396 "rw_mbytes_per_sec": 0, 00:18:52.396 "r_mbytes_per_sec": 0, 00:18:52.396 "w_mbytes_per_sec": 0 00:18:52.396 }, 00:18:52.396 "claimed": true, 00:18:52.396 "claim_type": "exclusive_write", 00:18:52.396 "zoned": false, 00:18:52.396 "supported_io_types": { 00:18:52.396 "read": true, 00:18:52.397 "write": true, 00:18:52.397 "unmap": true, 00:18:52.397 "write_zeroes": true, 00:18:52.397 "flush": true, 00:18:52.397 "reset": true, 00:18:52.397 "compare": false, 00:18:52.397 "compare_and_write": false, 00:18:52.397 "abort": true, 00:18:52.397 "nvme_admin": false, 00:18:52.397 "nvme_io": false 00:18:52.397 }, 00:18:52.397 "memory_domains": [ 00:18:52.397 { 
00:18:52.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:52.397 "dma_device_type": 2 00:18:52.397 } 00:18:52.397 ], 00:18:52.397 "driver_specific": {} 00:18:52.397 } 00:18:52.397 ] 00:18:52.397 07:19:25 -- common/autotest_common.sh@893 -- # return 0 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:52.397 07:19:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:52.655 07:19:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:52.655 "name": "Existed_Raid", 00:18:52.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.655 "strip_size_kb": 0, 00:18:52.655 "state": "configuring", 00:18:52.655 "raid_level": "raid1", 00:18:52.655 "superblock": false, 00:18:52.655 "num_base_bdevs": 4, 00:18:52.655 "num_base_bdevs_discovered": 2, 00:18:52.655 "num_base_bdevs_operational": 4, 00:18:52.655 "base_bdevs_list": [ 00:18:52.655 { 00:18:52.655 "name": "BaseBdev1", 00:18:52.655 "uuid": "9ab58494-3985-4724-8d13-8df727df6b24", 00:18:52.655 "is_configured": true, 00:18:52.655 "data_offset": 0, 00:18:52.655 "data_size": 65536 00:18:52.655 }, 00:18:52.655 { 00:18:52.655 "name": "BaseBdev2", 00:18:52.655 "uuid": "73afee83-af77-4b83-a932-9f386ab1049f", 00:18:52.655 "is_configured": true, 00:18:52.655 "data_offset": 0, 00:18:52.655 "data_size": 65536 00:18:52.655 }, 00:18:52.655 { 00:18:52.655 "name": "BaseBdev3", 00:18:52.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.655 "is_configured": false, 00:18:52.655 "data_offset": 0, 00:18:52.655 "data_size": 0 00:18:52.655 }, 00:18:52.655 { 00:18:52.655 "name": "BaseBdev4", 00:18:52.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:52.655 "is_configured": false, 00:18:52.655 "data_offset": 0, 00:18:52.655 "data_size": 0 00:18:52.655 } 00:18:52.655 ] 00:18:52.655 }' 00:18:52.655 07:19:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:52.655 07:19:26 -- common/autotest_common.sh@10 -- # set +x 00:18:53.220 07:19:26 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:18:53.478 [2024-02-13 07:19:27.073817] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:53.478 BaseBdev3 00:18:53.478 07:19:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:53.478 07:19:27 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:53.478 07:19:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:53.478 07:19:27 -- 
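BaseBdev3 has just been created by bdev_malloc_create; as with every base bdev in this trace, the script runs waitforbdev before touching it: it first drains any pending examine callbacks with bdev_wait_for_examine, then looks the bdev up by name with a 2000 ms timeout, as the records below show. Condensed, the recurring pattern is roughly this sketch (reconstructed from the trace, not the verbatim helper):

    # block until all bdev examine callbacks have run to completion
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
    # -t 2000: fail unless the named bdev shows up within 2000 ms
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000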
00:18:53.478 07:19:27 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:18:53.478 07:19:27 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:18:53.478 07:19:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:53.478 07:19:27 -- common/autotest_common.sh@887 -- # local i 00:18:53.478 07:19:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:53.478 07:19:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:53.478 07:19:27 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:53.736 07:19:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:53.995 [ 00:18:53.995 { 00:18:53.995 "name": "BaseBdev3", 00:18:53.995 "aliases": [ 00:18:53.995 "b5e00ac1-eb2b-41a2-9552-d10c7530f846" 00:18:53.995 ], 00:18:53.995 "product_name": "Malloc disk", 00:18:53.995 "block_size": 512, 00:18:53.995 "num_blocks": 65536, 00:18:53.995 "uuid": "b5e00ac1-eb2b-41a2-9552-d10c7530f846", 00:18:53.995 "assigned_rate_limits": { 00:18:53.995 "rw_ios_per_sec": 0, 00:18:53.995 "rw_mbytes_per_sec": 0, 00:18:53.995 "r_mbytes_per_sec": 0, 00:18:53.995 "w_mbytes_per_sec": 0 00:18:53.995 }, 00:18:53.995 "claimed": true, 00:18:53.995 "claim_type": "exclusive_write", 00:18:53.995 "zoned": false, 00:18:53.995 "supported_io_types": { 00:18:53.995 "read": true, 00:18:53.995 "write": true, 00:18:53.995 "unmap": true, 00:18:53.995 "write_zeroes": true, 00:18:53.995 "flush": true, 00:18:53.995 "reset": true, 00:18:53.995 "compare": false, 00:18:53.995 "compare_and_write": false, 00:18:53.995 "abort": true, 00:18:53.995 "nvme_admin": false, 00:18:53.995 "nvme_io": false 00:18:53.995 }, 00:18:53.995 "memory_domains": [ 00:18:53.995 { 00:18:53.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:53.995 "dma_device_type": 2 00:18:53.995 } 00:18:53.995 ], 00:18:53.995 "driver_specific": {} 00:18:53.995 } 00:18:53.995 ] 00:18:53.995 07:19:27 -- common/autotest_common.sh@893 -- # return 0 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:53.995 07:19:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:54.253 07:19:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:54.253 "name": "Existed_Raid", 00:18:54.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.253 "strip_size_kb": 0, 00:18:54.253 "state": "configuring", 00:18:54.253 "raid_level": "raid1", 00:18:54.253 "superblock": false, 00:18:54.253 "num_base_bdevs": 4, 00:18:54.253 "num_base_bdevs_discovered": 3, 00:18:54.253 "num_base_bdevs_operational": 4, 00:18:54.253 "base_bdevs_list": [ 00:18:54.253 { 00:18:54.253 "name": "BaseBdev1",
00:18:54.253 "uuid": "9ab58494-3985-4724-8d13-8df727df6b24", 00:18:54.253 "is_configured": true, 00:18:54.253 "data_offset": 0, 00:18:54.253 "data_size": 65536 00:18:54.253 }, 00:18:54.253 { 00:18:54.253 "name": "BaseBdev2", 00:18:54.253 "uuid": "73afee83-af77-4b83-a932-9f386ab1049f", 00:18:54.253 "is_configured": true, 00:18:54.253 "data_offset": 0, 00:18:54.253 "data_size": 65536 00:18:54.253 }, 00:18:54.253 { 00:18:54.253 "name": "BaseBdev3", 00:18:54.253 "uuid": "b5e00ac1-eb2b-41a2-9552-d10c7530f846", 00:18:54.253 "is_configured": true, 00:18:54.253 "data_offset": 0, 00:18:54.253 "data_size": 65536 00:18:54.253 }, 00:18:54.253 { 00:18:54.253 "name": "BaseBdev4", 00:18:54.253 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:54.253 "is_configured": false, 00:18:54.253 "data_offset": 0, 00:18:54.253 "data_size": 0 00:18:54.253 } 00:18:54.253 ] 00:18:54.254 }' 00:18:54.254 07:19:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:54.254 07:19:27 -- common/autotest_common.sh@10 -- # set +x 00:18:54.819 07:19:28 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:18:55.077 [2024-02-13 07:19:28.677383] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:55.077 [2024-02-13 07:19:28.677464] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:18:55.077 [2024-02-13 07:19:28.677492] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:18:55.077 [2024-02-13 07:19:28.677646] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:18:55.077 [2024-02-13 07:19:28.678069] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:18:55.077 [2024-02-13 07:19:28.678097] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:18:55.077 [2024-02-13 07:19:28.678433] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.077 BaseBdev4 00:18:55.077 07:19:28 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:18:55.077 07:19:28 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:18:55.077 07:19:28 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:18:55.077 07:19:28 -- common/autotest_common.sh@887 -- # local i 00:18:55.077 07:19:28 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:18:55.077 07:19:28 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:18:55.077 07:19:28 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:18:55.336 07:19:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:55.595 [ 00:18:55.595 { 00:18:55.595 "name": "BaseBdev4", 00:18:55.595 "aliases": [ 00:18:55.595 "35960415-1683-4c9d-ae11-cee09d5204fc" 00:18:55.595 ], 00:18:55.595 "product_name": "Malloc disk", 00:18:55.595 "block_size": 512, 00:18:55.595 "num_blocks": 65536, 00:18:55.595 "uuid": "35960415-1683-4c9d-ae11-cee09d5204fc", 00:18:55.595 "assigned_rate_limits": { 00:18:55.595 "rw_ios_per_sec": 0, 00:18:55.595 "rw_mbytes_per_sec": 0, 00:18:55.595 "r_mbytes_per_sec": 0, 00:18:55.595 "w_mbytes_per_sec": 0 00:18:55.595 }, 00:18:55.595 "claimed": true, 00:18:55.595 "claim_type": "exclusive_write", 00:18:55.595 "zoned": false, 00:18:55.595 "supported_io_types": { 
00:18:55.595 "read": true, 00:18:55.595 "write": true, 00:18:55.595 "unmap": true, 00:18:55.595 "write_zeroes": true, 00:18:55.595 "flush": true, 00:18:55.595 "reset": true, 00:18:55.595 "compare": false, 00:18:55.595 "compare_and_write": false, 00:18:55.595 "abort": true, 00:18:55.595 "nvme_admin": false, 00:18:55.595 "nvme_io": false 00:18:55.595 }, 00:18:55.595 "memory_domains": [ 00:18:55.595 { 00:18:55.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:55.595 "dma_device_type": 2 00:18:55.595 } 00:18:55.595 ], 00:18:55.595 "driver_specific": {} 00:18:55.595 } 00:18:55.595 ] 00:18:55.595 07:19:29 -- common/autotest_common.sh@893 -- # return 0 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:55.595 07:19:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:55.854 07:19:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:55.854 "name": "Existed_Raid", 00:18:55.854 "uuid": "57824633-3510-4414-b31c-09708f17cd3c", 00:18:55.854 "strip_size_kb": 0, 00:18:55.854 "state": "online", 00:18:55.854 "raid_level": "raid1", 00:18:55.854 "superblock": false, 00:18:55.854 "num_base_bdevs": 4, 00:18:55.854 "num_base_bdevs_discovered": 4, 00:18:55.854 "num_base_bdevs_operational": 4, 00:18:55.854 "base_bdevs_list": [ 00:18:55.854 { 00:18:55.854 "name": "BaseBdev1", 00:18:55.854 "uuid": "9ab58494-3985-4724-8d13-8df727df6b24", 00:18:55.854 "is_configured": true, 00:18:55.854 "data_offset": 0, 00:18:55.854 "data_size": 65536 00:18:55.854 }, 00:18:55.854 { 00:18:55.854 "name": "BaseBdev2", 00:18:55.854 "uuid": "73afee83-af77-4b83-a932-9f386ab1049f", 00:18:55.854 "is_configured": true, 00:18:55.854 "data_offset": 0, 00:18:55.854 "data_size": 65536 00:18:55.854 }, 00:18:55.854 { 00:18:55.854 "name": "BaseBdev3", 00:18:55.854 "uuid": "b5e00ac1-eb2b-41a2-9552-d10c7530f846", 00:18:55.854 "is_configured": true, 00:18:55.854 "data_offset": 0, 00:18:55.854 "data_size": 65536 00:18:55.854 }, 00:18:55.854 { 00:18:55.854 "name": "BaseBdev4", 00:18:55.854 "uuid": "35960415-1683-4c9d-ae11-cee09d5204fc", 00:18:55.854 "is_configured": true, 00:18:55.854 "data_offset": 0, 00:18:55.854 "data_size": 65536 00:18:55.854 } 00:18:55.854 ] 00:18:55.854 }' 00:18:55.854 07:19:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:55.854 07:19:29 -- common/autotest_common.sh@10 -- # set +x
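All four base bdevs now exist and the dump above shows the array online with 4 of 4 members discovered. The records below delete BaseBdev1 out from under the array: since raid1 is a redundant level, has_redundancy returns 0 and the test expects the raid bdev to stay online with only 3 operational members, which verify_raid_bdev_state Existed_Raid online raid1 0 3 then confirms. Judging from the @195/@196 xtrace lines, the helper is a case statement on the raid level, roughly like this sketch (only the raid1 arm is confirmed by this log; the default arm is assumed):

    has_redundancy() {
        case $1 in
            raid1) return 0 ;;  # mirrored level: survives the loss of a member
            *) return 1 ;;      # assumed: non-redundant levels (e.g. raid0, concat)
        esac
    }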
00:18:56.421 07:19:30 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:18:56.680 [2024-02-13 07:19:30.253927] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@196 -- # return 0 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:56.680 07:19:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:56.939 07:19:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:18:56.939 "name": "Existed_Raid", 00:18:56.939 "uuid": "57824633-3510-4414-b31c-09708f17cd3c", 00:18:56.939 "strip_size_kb": 0, 00:18:56.939 "state": "online", 00:18:56.939 "raid_level": "raid1", 00:18:56.939 "superblock": false, 00:18:56.939 "num_base_bdevs": 4, 00:18:56.939 "num_base_bdevs_discovered": 3, 00:18:56.939 "num_base_bdevs_operational": 3, 00:18:56.939 "base_bdevs_list": [ 00:18:56.939 { 00:18:56.939 "name": null, 00:18:56.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:56.939 "is_configured": false, 00:18:56.939 "data_offset": 0, 00:18:56.939 "data_size": 65536 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "name": "BaseBdev2", 00:18:56.939 "uuid": "73afee83-af77-4b83-a932-9f386ab1049f", 00:18:56.939 "is_configured": true, 00:18:56.939 "data_offset": 0, 00:18:56.939 "data_size": 65536 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "name": "BaseBdev3", 00:18:56.939 "uuid": "b5e00ac1-eb2b-41a2-9552-d10c7530f846", 00:18:56.939 "is_configured": true, 00:18:56.939 "data_offset": 0, 00:18:56.939 "data_size": 65536 00:18:56.939 }, 00:18:56.939 { 00:18:56.939 "name": "BaseBdev4", 00:18:56.939 "uuid": "35960415-1683-4c9d-ae11-cee09d5204fc", 00:18:56.939 "is_configured": true, 00:18:56.939 "data_offset": 0, 00:18:56.939 "data_size": 65536 00:18:56.939 } 00:18:56.939 ] 00:18:56.939 }' 00:18:56.939 07:19:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:18:56.939 07:19:30 -- common/autotest_common.sh@10 -- # set +x 00:18:57.875 07:19:31 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:18:57.875 07:19:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:57.875 07:19:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:57.875 07:19:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:57.875 07:19:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:57.875 07:19:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:57.875 07:19:31 -- bdev/bdev_raid.sh@279 -- #
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:18:58.134 [2024-02-13 07:19:31.660333] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:58.134 07:19:31 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:58.134 07:19:31 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:58.135 07:19:31 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.135 07:19:31 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:58.393 07:19:31 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:58.393 07:19:31 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:58.393 07:19:31 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:18:58.663 [2024-02-13 07:19:32.179935] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:58.663 07:19:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:58.663 07:19:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:58.663 07:19:32 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:58.663 07:19:32 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:18:58.936 07:19:32 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:18:58.936 07:19:32 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:58.936 07:19:32 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:18:59.195 [2024-02-13 07:19:32.641265] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:59.195 [2024-02-13 07:19:32.641314] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:59.195 [2024-02-13 07:19:32.641416] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:59.195 [2024-02-13 07:19:32.708471] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:59.195 [2024-02-13 07:19:32.708518] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:18:59.195 07:19:32 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:18:59.195 07:19:32 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:18:59.195 07:19:32 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:59.195 07:19:32 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:18:59.454 07:19:32 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:18:59.454 07:19:32 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:18:59.454 07:19:32 -- bdev/bdev_raid.sh@287 -- # killprocess 125580 00:18:59.454 07:19:32 -- common/autotest_common.sh@924 -- # '[' -z 125580 ']' 00:18:59.454 07:19:32 -- common/autotest_common.sh@928 -- # kill -0 125580 00:18:59.454 07:19:32 -- common/autotest_common.sh@929 -- # uname 00:18:59.454 07:19:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:18:59.454 07:19:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 125580 00:18:59.454 07:19:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:18:59.454 07:19:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:18:59.454 07:19:32 -- common/autotest_common.sh@942 -- # echo 'killing process with 
pid 125580' 00:18:59.454 killing process with pid 125580 00:18:59.454 07:19:32 -- common/autotest_common.sh@943 -- # kill 125580 00:18:59.454 07:19:32 -- common/autotest_common.sh@948 -- # wait 125580 00:18:59.454 [2024-02-13 07:19:32.954220] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:59.454 [2024-02-13 07:19:32.954351] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:00.392 ************************************ 00:19:00.392 END TEST raid_state_function_test 00:19:00.392 ************************************ 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@289 -- # return 0 00:19:00.392 00:19:00.392 real 0m14.357s 00:19:00.392 user 0m25.879s 00:19:00.392 sys 0m1.602s 00:19:00.392 07:19:33 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:00.392 07:19:33 -- common/autotest_common.sh@10 -- # set +x 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:19:00.392 07:19:33 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:19:00.392 07:19:33 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:00.392 07:19:33 -- common/autotest_common.sh@10 -- # set +x 00:19:00.392 ************************************ 00:19:00.392 START TEST raid_state_function_test_sb 00:19:00.392 ************************************ 00:19:00.392 07:19:33 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid1 4 true 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:00.392 07:19:33 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:19:00.392 07:19:34 -- 
bdev/bdev_raid.sh@226 -- # raid_pid=126035 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 126035' 00:19:00.392 Process raid pid: 126035 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@228 -- # waitforlisten 126035 /var/tmp/spdk-raid.sock 00:19:00.392 07:19:34 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:19:00.392 07:19:34 -- common/autotest_common.sh@817 -- # '[' -z 126035 ']' 00:19:00.392 07:19:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:00.392 07:19:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:00.392 07:19:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:00.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:00.392 07:19:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:00.392 07:19:34 -- common/autotest_common.sh@10 -- # set +x 00:19:00.392 [2024-02-13 07:19:34.060276] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:19:00.392 [2024-02-13 07:19:34.060442] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:00.651 [2024-02-13 07:19:34.209522] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.910 [2024-02-13 07:19:34.454617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.168 [2024-02-13 07:19:34.666557] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:01.427 07:19:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:01.427 07:19:35 -- common/autotest_common.sh@850 -- # return 0 00:19:01.427 07:19:35 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:01.687 [2024-02-13 07:19:35.278883] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:01.687 [2024-02-13 07:19:35.279003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:01.687 [2024-02-13 07:19:35.279017] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:01.687 [2024-02-13 07:19:35.279055] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:01.687 [2024-02-13 07:19:35.279064] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:01.687 [2024-02-13 07:19:35.279109] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:01.687 [2024-02-13 07:19:35.279118] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:01.687 [2024-02-13 07:19:35.279149] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
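This second test walks the same state machine with superblock=true: bdev_raid_create is now invoked with -s, so each base bdev reserves its leading blocks for on-disk raid metadata, which is why the bdev dumps later in this run report data_offset 2048 and data_size 63488 where the non-superblock run reported 0 and 65536. The verify_raid_bdev_state helper whose locals are being set here checks state by filtering bdev_raid_get_bdevs output through jq, essentially as follows (a condensed sketch of what the trace shows, not the verbatim function):

    raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")')
    # compare the reported state against the expected one, e.g. "configuring" or "online"
    [[ $(jq -r '.state' <<< "$raid_bdev_info") == "$expected_state" ]]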
00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:01.687 07:19:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:01.945 07:19:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:01.945 "name": "Existed_Raid", 00:19:01.945 "uuid": "0aac1ae1-a743-4f27-9d9c-81cd65395b03", 00:19:01.946 "strip_size_kb": 0, 00:19:01.946 "state": "configuring", 00:19:01.946 "raid_level": "raid1", 00:19:01.946 "superblock": true, 00:19:01.946 "num_base_bdevs": 4, 00:19:01.946 "num_base_bdevs_discovered": 0, 00:19:01.946 "num_base_bdevs_operational": 4, 00:19:01.946 "base_bdevs_list": [ 00:19:01.946 { 00:19:01.946 "name": "BaseBdev1", 00:19:01.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.946 "is_configured": false, 00:19:01.946 "data_offset": 0, 00:19:01.946 "data_size": 0 00:19:01.946 }, 00:19:01.946 { 00:19:01.946 "name": "BaseBdev2", 00:19:01.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.946 "is_configured": false, 00:19:01.946 "data_offset": 0, 00:19:01.946 "data_size": 0 00:19:01.946 }, 00:19:01.946 { 00:19:01.946 "name": "BaseBdev3", 00:19:01.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.946 "is_configured": false, 00:19:01.946 "data_offset": 0, 00:19:01.946 "data_size": 0 00:19:01.946 }, 00:19:01.946 { 00:19:01.946 "name": "BaseBdev4", 00:19:01.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.946 "is_configured": false, 00:19:01.946 "data_offset": 0, 00:19:01.946 "data_size": 0 00:19:01.946 } 00:19:01.946 ] 00:19:01.946 }' 00:19:01.946 07:19:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:01.946 07:19:35 -- common/autotest_common.sh@10 -- # set +x 00:19:02.882 07:19:36 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:02.882 [2024-02-13 07:19:36.470885] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:02.882 [2024-02-13 07:19:36.470935] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:19:02.882 07:19:36 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:19:03.141 [2024-02-13 07:19:36.718992] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:03.141 [2024-02-13 07:19:36.719068] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:03.141 [2024-02-13 07:19:36.719081] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:03.141 [2024-02-13 07:19:36.719107] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:03.141 [2024-02-13 07:19:36.719115] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:19:03.141 [2024-02-13 07:19:36.719181] 
bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:19:03.141 [2024-02-13 07:19:36.719189] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:19:03.141 [2024-02-13 07:19:36.719213] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:19:03.141 07:19:36 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:03.400 [2024-02-13 07:19:36.956992] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:03.400 BaseBdev1 00:19:03.400 07:19:36 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:19:03.400 07:19:36 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:03.400 07:19:36 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:03.400 07:19:36 -- common/autotest_common.sh@887 -- # local i 00:19:03.400 07:19:36 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:03.400 07:19:36 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:03.400 07:19:36 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:03.659 07:19:37 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:03.918 [ 00:19:03.918 { 00:19:03.918 "name": "BaseBdev1", 00:19:03.918 "aliases": [ 00:19:03.918 "3090ead2-42be-42f2-a43f-88b800a80677" 00:19:03.918 ], 00:19:03.918 "product_name": "Malloc disk", 00:19:03.918 "block_size": 512, 00:19:03.918 "num_blocks": 65536, 00:19:03.918 "uuid": "3090ead2-42be-42f2-a43f-88b800a80677", 00:19:03.918 "assigned_rate_limits": { 00:19:03.918 "rw_ios_per_sec": 0, 00:19:03.918 "rw_mbytes_per_sec": 0, 00:19:03.918 "r_mbytes_per_sec": 0, 00:19:03.918 "w_mbytes_per_sec": 0 00:19:03.918 }, 00:19:03.918 "claimed": true, 00:19:03.918 "claim_type": "exclusive_write", 00:19:03.918 "zoned": false, 00:19:03.918 "supported_io_types": { 00:19:03.918 "read": true, 00:19:03.918 "write": true, 00:19:03.918 "unmap": true, 00:19:03.918 "write_zeroes": true, 00:19:03.918 "flush": true, 00:19:03.918 "reset": true, 00:19:03.918 "compare": false, 00:19:03.918 "compare_and_write": false, 00:19:03.918 "abort": true, 00:19:03.918 "nvme_admin": false, 00:19:03.918 "nvme_io": false 00:19:03.918 }, 00:19:03.918 "memory_domains": [ 00:19:03.918 { 00:19:03.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:03.918 "dma_device_type": 2 00:19:03.918 } 00:19:03.918 ], 00:19:03.918 "driver_specific": {} 00:19:03.918 } 00:19:03.918 ] 00:19:03.918 07:19:37 -- common/autotest_common.sh@893 -- # return 0 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:03.919 "name": "Existed_Raid", 00:19:03.919 "uuid": "c48b5a4c-28ed-422a-8f97-c8c899f58302", 00:19:03.919 "strip_size_kb": 0, 00:19:03.919 "state": "configuring", 00:19:03.919 "raid_level": "raid1", 00:19:03.919 "superblock": true, 00:19:03.919 "num_base_bdevs": 4, 00:19:03.919 "num_base_bdevs_discovered": 1, 00:19:03.919 "num_base_bdevs_operational": 4, 00:19:03.919 "base_bdevs_list": [ 00:19:03.919 { 00:19:03.919 "name": "BaseBdev1", 00:19:03.919 "uuid": "3090ead2-42be-42f2-a43f-88b800a80677", 00:19:03.919 "is_configured": true, 00:19:03.919 "data_offset": 2048, 00:19:03.919 "data_size": 63488 00:19:03.919 }, 00:19:03.919 { 00:19:03.919 "name": "BaseBdev2", 00:19:03.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.919 "is_configured": false, 00:19:03.919 "data_offset": 0, 00:19:03.919 "data_size": 0 00:19:03.919 }, 00:19:03.919 { 00:19:03.919 "name": "BaseBdev3", 00:19:03.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.919 "is_configured": false, 00:19:03.919 "data_offset": 0, 00:19:03.919 "data_size": 0 00:19:03.919 }, 00:19:03.919 { 00:19:03.919 "name": "BaseBdev4", 00:19:03.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.919 "is_configured": false, 00:19:03.919 "data_offset": 0, 00:19:03.919 "data_size": 0 00:19:03.919 } 00:19:03.919 ] 00:19:03.919 }' 00:19:03.919 07:19:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:03.919 07:19:37 -- common/autotest_common.sh@10 -- # set +x 00:19:04.854 07:19:38 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:19:04.854 [2024-02-13 07:19:38.501347] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:04.854 [2024-02-13 07:19:38.501397] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:19:04.854 07:19:38 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:19:04.854 07:19:38 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:19:05.422 07:19:38 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:05.422 BaseBdev1 00:19:05.422 07:19:39 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:19:05.422 07:19:39 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:19:05.422 07:19:39 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:19:05.422 07:19:39 -- common/autotest_common.sh@887 -- # local i 00:19:05.422 07:19:39 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:19:05.422 07:19:39 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:19:05.422 07:19:39 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:19:05.680 07:19:39 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:05.939 [ 00:19:05.939 { 00:19:05.939 "name": "BaseBdev1", 00:19:05.939 "aliases": [ 00:19:05.939 "d5656366-7c30-48a2-a3b5-aefe6a9b247f" 00:19:05.939 
],
00:19:05.939 "product_name": "Malloc disk",
00:19:05.939 "block_size": 512,
00:19:05.939 "num_blocks": 65536,
00:19:05.939 "uuid": "d5656366-7c30-48a2-a3b5-aefe6a9b247f",
00:19:05.939 "assigned_rate_limits": {
00:19:05.939 "rw_ios_per_sec": 0,
00:19:05.939 "rw_mbytes_per_sec": 0,
00:19:05.939 "r_mbytes_per_sec": 0,
00:19:05.940 "w_mbytes_per_sec": 0
00:19:05.940 },
00:19:05.940 "claimed": false,
00:19:05.940 "zoned": false,
00:19:05.940 "supported_io_types": {
00:19:05.940 "read": true,
00:19:05.940 "write": true,
00:19:05.940 "unmap": true,
00:19:05.940 "write_zeroes": true,
00:19:05.940 "flush": true,
00:19:05.940 "reset": true,
00:19:05.940 "compare": false,
00:19:05.940 "compare_and_write": false,
00:19:05.940 "abort": true,
00:19:05.940 "nvme_admin": false,
00:19:05.940 "nvme_io": false
00:19:05.940 },
00:19:05.940 "memory_domains": [
00:19:05.940 {
00:19:05.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:05.940 "dma_device_type": 2
00:19:05.940 }
00:19:05.940 ],
00:19:05.940 "driver_specific": {}
00:19:05.940 }
00:19:05.940 ]
00:19:05.940 07:19:39 -- common/autotest_common.sh@893 -- # return 0
00:19:05.940 07:19:39 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
00:19:06.199 [2024-02-13 07:19:39.809248] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:19:06.199 [2024-02-13 07:19:39.811372] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:19:06.199 [2024-02-13 07:19:39.811509] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:19:06.199 [2024-02-13 07:19:39.811539] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:19:06.199 [2024-02-13 07:19:39.811581] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:19:06.199 [2024-02-13 07:19:39.811590] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:19:06.199 [2024-02-13 07:19:39.811608] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:06.199 07:19:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:06.458 07:19:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:06.458 "name": "Existed_Raid",
00:19:06.458 "uuid": "ec977e74-792b-4620-90fb-9702e23dff60",
00:19:06.458 "strip_size_kb": 0,
00:19:06.458 "state": "configuring",
00:19:06.458 "raid_level": "raid1",
00:19:06.458 "superblock": true,
00:19:06.458 "num_base_bdevs": 4,
00:19:06.458 "num_base_bdevs_discovered": 1,
00:19:06.458 "num_base_bdevs_operational": 4,
00:19:06.458 "base_bdevs_list": [
00:19:06.458 {
00:19:06.458 "name": "BaseBdev1",
00:19:06.458 "uuid": "d5656366-7c30-48a2-a3b5-aefe6a9b247f",
00:19:06.458 "is_configured": true,
00:19:06.458 "data_offset": 2048,
00:19:06.458 "data_size": 63488
00:19:06.458 },
00:19:06.458 {
00:19:06.458 "name": "BaseBdev2",
00:19:06.458 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:06.458 "is_configured": false,
00:19:06.458 "data_offset": 0,
00:19:06.458 "data_size": 0
00:19:06.458 },
00:19:06.458 {
00:19:06.458 "name": "BaseBdev3",
00:19:06.458 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:06.458 "is_configured": false,
00:19:06.458 "data_offset": 0,
00:19:06.458 "data_size": 0
00:19:06.458 },
00:19:06.458 {
00:19:06.458 "name": "BaseBdev4",
00:19:06.458 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:06.458 "is_configured": false,
00:19:06.458 "data_offset": 0,
00:19:06.458 "data_size": 0
00:19:06.458 }
00:19:06.458 ]
00:19:06.458 }'
00:19:06.458 07:19:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:06.458 07:19:40 -- common/autotest_common.sh@10 -- # set +x
00:19:07.394 07:19:40 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:19:07.394 [2024-02-13 07:19:40.975578] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:19:07.394 BaseBdev2
00:19:07.394 07:19:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:19:07.394 07:19:40 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2
00:19:07.394 07:19:40 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:19:07.394 07:19:40 -- common/autotest_common.sh@887 -- # local i
00:19:07.394 07:19:40 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:19:07.394 07:19:40 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:19:07.394 07:19:40 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:19:07.653 07:19:41 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:19:07.912 [
00:19:07.912 {
00:19:07.912 "name": "BaseBdev2",
00:19:07.912 "aliases": [
00:19:07.912 "b9423e3b-567e-46b7-a458-94e58b8cd2bb"
00:19:07.912 ],
00:19:07.912 "product_name": "Malloc disk",
00:19:07.912 "block_size": 512,
00:19:07.912 "num_blocks": 65536,
00:19:07.912 "uuid": "b9423e3b-567e-46b7-a458-94e58b8cd2bb",
00:19:07.912 "assigned_rate_limits": {
00:19:07.912 "rw_ios_per_sec": 0,
00:19:07.912 "rw_mbytes_per_sec": 0,
00:19:07.912 "r_mbytes_per_sec": 0,
00:19:07.912 "w_mbytes_per_sec": 0
00:19:07.912 },
00:19:07.912 "claimed": true,
00:19:07.912 "claim_type": "exclusive_write",
00:19:07.912 "zoned": false,
00:19:07.912 "supported_io_types": {
00:19:07.912 "read": true,
00:19:07.912 "write": true,
00:19:07.912 "unmap": true,
00:19:07.912 "write_zeroes": true,
00:19:07.912 "flush": true,
00:19:07.912 "reset": true,
00:19:07.912 "compare": false,
00:19:07.912 "compare_and_write": false,
00:19:07.912 "abort": true,
00:19:07.912 "nvme_admin": false,
00:19:07.912 "nvme_io": false
00:19:07.912 },
00:19:07.912 "memory_domains": [
00:19:07.912 {
00:19:07.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:07.912 "dma_device_type": 2
00:19:07.912 }
00:19:07.912 ],
00:19:07.912 "driver_specific": {}
00:19:07.912 }
00:19:07.912 ]
00:19:07.912 07:19:41 -- common/autotest_common.sh@893 -- # return 0
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:07.912 07:19:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:08.170 07:19:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:08.170 "name": "Existed_Raid",
00:19:08.170 "uuid": "ec977e74-792b-4620-90fb-9702e23dff60",
00:19:08.170 "strip_size_kb": 0,
00:19:08.170 "state": "configuring",
00:19:08.170 "raid_level": "raid1",
00:19:08.170 "superblock": true,
00:19:08.170 "num_base_bdevs": 4,
00:19:08.170 "num_base_bdevs_discovered": 2,
00:19:08.170 "num_base_bdevs_operational": 4,
00:19:08.170 "base_bdevs_list": [
00:19:08.170 {
00:19:08.170 "name": "BaseBdev1",
00:19:08.170 "uuid": "d5656366-7c30-48a2-a3b5-aefe6a9b247f",
00:19:08.170 "is_configured": true,
00:19:08.170 "data_offset": 2048,
00:19:08.170 "data_size": 63488
00:19:08.170 },
00:19:08.170 {
00:19:08.170 "name": "BaseBdev2",
00:19:08.170 "uuid": "b9423e3b-567e-46b7-a458-94e58b8cd2bb",
00:19:08.170 "is_configured": true,
00:19:08.170 "data_offset": 2048,
00:19:08.170 "data_size": 63488
00:19:08.170 },
00:19:08.170 {
00:19:08.170 "name": "BaseBdev3",
00:19:08.170 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:08.170 "is_configured": false,
00:19:08.170 "data_offset": 0,
00:19:08.170 "data_size": 0
00:19:08.170 },
00:19:08.170 {
00:19:08.170 "name": "BaseBdev4",
00:19:08.170 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:08.170 "is_configured": false,
00:19:08.170 "data_offset": 0,
00:19:08.170 "data_size": 0
00:19:08.170 }
00:19:08.170 ]
00:19:08.170 }'
00:19:08.170 07:19:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:08.170 07:19:41 -- common/autotest_common.sh@10 -- # set +x
00:19:08.737 07:19:42 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
00:19:08.999 [2024-02-13 07:19:42.625453] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:19:08.999 BaseBdev3
00:19:08.999 07:19:42 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3
00:19:08.999 07:19:42 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3
00:19:08.999 07:19:42 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:19:08.999 07:19:42 -- common/autotest_common.sh@887 -- # local i
00:19:08.999 07:19:42 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:19:08.999 07:19:42 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:19:08.999 07:19:42 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:19:09.290 07:19:42 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000
00:19:09.552 [
00:19:09.552 {
00:19:09.552 "name": "BaseBdev3",
00:19:09.552 "aliases": [
00:19:09.552 "ae377479-c2f6-4dc3-b4f3-178465d04153"
00:19:09.552 ],
00:19:09.552 "product_name": "Malloc disk",
00:19:09.552 "block_size": 512,
00:19:09.552 "num_blocks": 65536,
00:19:09.552 "uuid": "ae377479-c2f6-4dc3-b4f3-178465d04153",
00:19:09.552 "assigned_rate_limits": {
00:19:09.552 "rw_ios_per_sec": 0,
00:19:09.552 "rw_mbytes_per_sec": 0,
00:19:09.552 "r_mbytes_per_sec": 0,
00:19:09.552 "w_mbytes_per_sec": 0
00:19:09.552 },
00:19:09.552 "claimed": true,
00:19:09.552 "claim_type": "exclusive_write",
00:19:09.552 "zoned": false,
00:19:09.552 "supported_io_types": {
00:19:09.552 "read": true,
00:19:09.552 "write": true,
00:19:09.552 "unmap": true,
00:19:09.552 "write_zeroes": true,
00:19:09.552 "flush": true,
00:19:09.552 "reset": true,
00:19:09.552 "compare": false,
00:19:09.552 "compare_and_write": false,
00:19:09.552 "abort": true,
00:19:09.552 "nvme_admin": false,
00:19:09.552 "nvme_io": false
00:19:09.552 },
00:19:09.552 "memory_domains": [
00:19:09.552 {
00:19:09.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:09.552 "dma_device_type": 2
00:19:09.552 }
00:19:09.552 ],
00:19:09.552 "driver_specific": {}
00:19:09.552 }
00:19:09.552 ]
00:19:09.552 07:19:43 -- common/autotest_common.sh@893 -- # return 0
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:09.552 07:19:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:09.810 07:19:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:09.810 "name": "Existed_Raid",
00:19:09.810 "uuid": "ec977e74-792b-4620-90fb-9702e23dff60",
00:19:09.810 "strip_size_kb": 0,
00:19:09.810 "state": "configuring",
00:19:09.810 "raid_level": "raid1",
00:19:09.810 "superblock": true,
00:19:09.810 "num_base_bdevs": 4,
00:19:09.810 "num_base_bdevs_discovered": 3,
00:19:09.810 "num_base_bdevs_operational": 4,
00:19:09.810 "base_bdevs_list": [
00:19:09.810 {
00:19:09.810 "name": "BaseBdev1",
00:19:09.810 "uuid": "d5656366-7c30-48a2-a3b5-aefe6a9b247f",
00:19:09.810 "is_configured": true,
00:19:09.810 "data_offset": 2048,
00:19:09.810 "data_size": 63488
00:19:09.810 },
00:19:09.810 {
00:19:09.810 "name": "BaseBdev2",
00:19:09.810 "uuid": "b9423e3b-567e-46b7-a458-94e58b8cd2bb",
00:19:09.810 "is_configured": true,
00:19:09.810 "data_offset": 2048,
00:19:09.810 "data_size": 63488
00:19:09.810 },
00:19:09.810 {
00:19:09.810 "name": "BaseBdev3",
00:19:09.810 "uuid": "ae377479-c2f6-4dc3-b4f3-178465d04153",
00:19:09.810 "is_configured": true,
00:19:09.810 "data_offset": 2048,
00:19:09.810 "data_size": 63488
00:19:09.810 },
00:19:09.810 {
00:19:09.810 "name": "BaseBdev4",
00:19:09.810 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:09.810 "is_configured": false,
00:19:09.810 "data_offset": 0,
00:19:09.810 "data_size": 0
00:19:09.810 }
00:19:09.810 ]
00:19:09.810 }'
00:19:09.810 07:19:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:09.810 07:19:43 -- common/autotest_common.sh@10 -- # set +x
00:19:10.746 07:19:44 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
00:19:10.746 [2024-02-13 07:19:44.378135] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:19:10.746 [2024-02-13 07:19:44.378439] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880
00:19:10.746 [2024-02-13 07:19:44.378454] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:10.746 [2024-02-13 07:19:44.378622] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790
00:19:10.746 BaseBdev4
00:19:10.746 [2024-02-13 07:19:44.379015] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880
00:19:10.746 [2024-02-13 07:19:44.379037] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880
00:19:10.746 [2024-02-13 07:19:44.379218] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:10.746 07:19:44 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4
00:19:10.746 07:19:44 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4
00:19:10.746 07:19:44 -- common/autotest_common.sh@886 -- # local bdev_timeout=
00:19:10.746 07:19:44 -- common/autotest_common.sh@887 -- # local i
00:19:10.746 07:19:44 -- common/autotest_common.sh@888 -- # [[ -z '' ]]
00:19:10.746 07:19:44 -- common/autotest_common.sh@888 -- # bdev_timeout=2000
00:19:10.746 07:19:44 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:19:11.004 07:19:44 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000
00:19:11.262 [
00:19:11.262 {
00:19:11.262 "name": "BaseBdev4",
00:19:11.262 "aliases": [
00:19:11.262 "f3456481-a1c7-410a-b8b9-cbdef6172a7a"
00:19:11.262 ],
00:19:11.262 "product_name": "Malloc disk",
00:19:11.262 "block_size": 512,
00:19:11.262 "num_blocks": 65536,
00:19:11.262 "uuid": "f3456481-a1c7-410a-b8b9-cbdef6172a7a",
00:19:11.262 "assigned_rate_limits": {
00:19:11.262 "rw_ios_per_sec": 0,
00:19:11.262 "rw_mbytes_per_sec": 0,
00:19:11.262 "r_mbytes_per_sec": 0,
00:19:11.262 "w_mbytes_per_sec": 0
00:19:11.262 },
00:19:11.262 "claimed": true,
00:19:11.262 "claim_type": "exclusive_write",
00:19:11.262 "zoned": false,
00:19:11.262 "supported_io_types": {
00:19:11.262 "read": true,
00:19:11.262 "write": true,
00:19:11.262 "unmap": true,
00:19:11.262 "write_zeroes": true,
00:19:11.262 "flush": true,
00:19:11.262 "reset": true,
00:19:11.262 "compare": false,
00:19:11.262 "compare_and_write": false,
00:19:11.262 "abort": true,
00:19:11.262 "nvme_admin": false,
00:19:11.262 "nvme_io": false
00:19:11.262 },
00:19:11.262 "memory_domains": [
00:19:11.262 {
00:19:11.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:19:11.262 "dma_device_type": 2
00:19:11.262 }
00:19:11.262 ],
00:19:11.262 "driver_specific": {}
00:19:11.262 }
00:19:11.262 ]
00:19:11.262 07:19:44 -- common/autotest_common.sh@893 -- # return 0
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:11.262 07:19:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:11.520 07:19:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:11.520 "name": "Existed_Raid",
00:19:11.520 "uuid": "ec977e74-792b-4620-90fb-9702e23dff60",
00:19:11.520 "strip_size_kb": 0,
00:19:11.521 "state": "online",
00:19:11.521 "raid_level": "raid1",
00:19:11.521 "superblock": true,
00:19:11.521 "num_base_bdevs": 4,
00:19:11.521 "num_base_bdevs_discovered": 4,
00:19:11.521 "num_base_bdevs_operational": 4,
00:19:11.521 "base_bdevs_list": [
00:19:11.521 {
00:19:11.521 "name": "BaseBdev1",
00:19:11.521 "uuid": "d5656366-7c30-48a2-a3b5-aefe6a9b247f",
00:19:11.521 "is_configured": true,
00:19:11.521 "data_offset": 2048,
00:19:11.521 "data_size": 63488
00:19:11.521 },
00:19:11.521 {
00:19:11.521 "name": "BaseBdev2",
00:19:11.521 "uuid": "b9423e3b-567e-46b7-a458-94e58b8cd2bb",
00:19:11.521 "is_configured": true,
00:19:11.521 "data_offset": 2048,
00:19:11.521 "data_size": 63488
00:19:11.521 },
00:19:11.521 {
00:19:11.521 "name": "BaseBdev3",
00:19:11.521 "uuid": "ae377479-c2f6-4dc3-b4f3-178465d04153",
00:19:11.521 "is_configured": true,
00:19:11.521 "data_offset": 2048,
00:19:11.521 "data_size": 63488
00:19:11.521 },
00:19:11.521 {
00:19:11.521 "name": "BaseBdev4",
00:19:11.521 "uuid": "f3456481-a1c7-410a-b8b9-cbdef6172a7a",
00:19:11.521 "is_configured": true,
00:19:11.521 "data_offset": 2048,
00:19:11.521 "data_size": 63488
00:19:11.521 }
00:19:11.521 ]
00:19:11.521 }'
00:19:11.521 07:19:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:11.521 07:19:45 -- common/autotest_common.sh@10 -- # set +x
00:19:12.455 07:19:45 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:19:12.455 [2024-02-13 07:19:46.050614] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:19:12.455 07:19:46 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:19:12.455 07:19:46 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1
00:19:12.455 07:19:46 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:19:12.455 07:19:46 -- bdev/bdev_raid.sh@196 -- # return 0
00:19:12.455 07:19:46 -- bdev/bdev_raid.sh@267 -- # expected_state=online
00:19:12.455 07:19:46 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:19:12.713 07:19:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:12.714 07:19:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:19:12.972 07:19:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:12.972 "name": "Existed_Raid",
00:19:12.972 "uuid": "ec977e74-792b-4620-90fb-9702e23dff60",
00:19:12.972 "strip_size_kb": 0,
00:19:12.972 "state": "online",
00:19:12.972 "raid_level": "raid1",
00:19:12.972 "superblock": true,
00:19:12.972 "num_base_bdevs": 4,
00:19:12.972 "num_base_bdevs_discovered": 3,
00:19:12.972 "num_base_bdevs_operational": 3,
00:19:12.972 "base_bdevs_list": [
00:19:12.972 {
00:19:12.972 "name": null,
00:19:12.972 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:12.972 "is_configured": false,
00:19:12.972 "data_offset": 2048,
00:19:12.972 "data_size": 63488
00:19:12.972 },
00:19:12.972 {
00:19:12.972 "name": "BaseBdev2",
00:19:12.972 "uuid": "b9423e3b-567e-46b7-a458-94e58b8cd2bb",
00:19:12.972 "is_configured": true,
00:19:12.972 "data_offset": 2048,
00:19:12.972 "data_size": 63488
00:19:12.972 },
00:19:12.972 {
00:19:12.972 "name": "BaseBdev3",
00:19:12.972 "uuid": "ae377479-c2f6-4dc3-b4f3-178465d04153",
00:19:12.972 "is_configured": true,
00:19:12.972 "data_offset": 2048,
00:19:12.972 "data_size": 63488
00:19:12.972 },
00:19:12.972 {
00:19:12.972 "name": "BaseBdev4",
00:19:12.972 "uuid": "f3456481-a1c7-410a-b8b9-cbdef6172a7a",
00:19:12.972 "is_configured": true,
00:19:12.972 "data_offset": 2048,
00:19:12.972 "data_size": 63488
00:19:12.972 }
00:19:12.972 ]
00:19:12.972 }'
00:19:12.972 07:19:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:12.972 07:19:46 -- common/autotest_common.sh@10 -- # set +x
00:19:13.538 07:19:47 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:19:13.538 07:19:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:19:13.539 07:19:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:13.539 07:19:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:19:13.796 07:19:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:19:13.796 07:19:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:19:13.796 07:19:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:19:14.054 [2024-02-13 07:19:47.633197] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:19:14.054 07:19:47 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:19:14.054 07:19:47 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:19:14.054 07:19:47 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:14.054 07:19:47 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:19:14.311 07:19:47 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:19:14.311 07:19:47 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:19:14.311 07:19:47 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:19:14.569 [2024-02-13 07:19:48.199388] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:19:14.826 07:19:48 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:19:14.826 07:19:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:19:14.826 07:19:48 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:14.826 07:19:48 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:19:14.826 07:19:48 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:19:14.826 07:19:48 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:19:14.826 07:19:48 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:19:15.391 [2024-02-13 07:19:48.791576] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:19:15.391 [2024-02-13 07:19:48.791627] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:15.391 [2024-02-13 07:19:48.791706] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:15.391 [2024-02-13 07:19:48.878520] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:15.391 [2024-02-13 07:19:48.878580] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline
00:19:15.391 07:19:48 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:19:15.391 07:19:48 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:19:15.391 07:19:48 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:15.391 07:19:48 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:19:15.649 07:19:49 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:19:15.649 07:19:49 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:19:15.649 07:19:49 -- bdev/bdev_raid.sh@287 -- # killprocess 126035
00:19:15.649 07:19:49 -- common/autotest_common.sh@924 -- # '[' -z 126035 ']'
00:19:15.649 07:19:49 -- common/autotest_common.sh@928 -- # kill -0 126035
00:19:15.649 07:19:49 -- common/autotest_common.sh@929 -- # uname
00:19:15.649 07:19:49 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']'
00:19:15.649 07:19:49 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 126035
00:19:15.649 killing process with pid 126035
00:19:15.649 07:19:49 -- common/autotest_common.sh@930 -- # process_name=reactor_0
00:19:15.649 07:19:49 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']'
00:19:15.649 07:19:49 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 126035'
00:19:15.649 07:19:49 -- common/autotest_common.sh@943 -- # kill 126035
00:19:15.649 07:19:49 -- common/autotest_common.sh@948 -- # wait 126035
00:19:15.649 [2024-02-13 07:19:49.157020] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:19:15.649 [2024-02-13 07:19:49.157489] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:19:17.022 ************************************
00:19:17.022 END TEST raid_state_function_test_sb
00:19:17.022 ************************************
00:19:17.022 07:19:50 -- bdev/bdev_raid.sh@289 -- # return 0
00:19:17.022
00:19:17.022 real 0m16.347s
00:19:17.022 user 0m29.120s
00:19:17.022 sys 0m1.999s
00:19:17.022 07:19:50 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:19:17.022 07:19:50 -- common/autotest_common.sh@10 -- # set +x
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4
00:19:17.023 07:19:50 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']'
00:19:17.023 07:19:50 -- common/autotest_common.sh@1081 -- # xtrace_disable
00:19:17.023 07:19:50 -- common/autotest_common.sh@10 -- # set +x
00:19:17.023 ************************************
00:19:17.023 START TEST raid_superblock_test
00:19:17.023 ************************************
00:19:17.023 07:19:50 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid1 4
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']'
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@353 -- # strip_size=0
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@357 -- # raid_pid=126539
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@358 -- # waitforlisten 126539 /var/tmp/spdk-raid.sock
00:19:17.023 07:19:50 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:19:17.023 07:19:50 -- common/autotest_common.sh@817 -- # '[' -z 126539 ']'
00:19:17.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:19:17.023 07:19:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:19:17.023 07:19:50 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:17.023 07:19:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
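[Editor's note: the raid_state_function_test_sb run that just ended built a four-disk RAID1 one base bdev at a time, re-checking the array state after each step. The flow can be reproduced by hand against a running bdev_svc app; the sketch below is a minimal, collapsed version (all four bases created up front rather than incrementally), assuming the same socket path, bdev names, and sizes this log uses. The RPC shell variable is only a local shorthand.]

# Build the four 32 MiB / 512 B-block malloc base bdevs, then assemble a
# superblock-enabled (-s) RAID1 named Existed_Raid, as bdev_raid.sh@253-257 does.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for b in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
  $RPC bdev_malloc_create 32 512 -b "$b"
  $RPC bdev_wait_for_examine     # let examine callbacks settle before the next step
done
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
# Poll the array the same way verify_raid_bdev_state does; with all four bases
# present the state should read "online" rather than "configuring".
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'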
00:19:17.023 07:19:50 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:17.023 07:19:50 -- common/autotest_common.sh@10 -- # set +x
00:19:17.023 [2024-02-13 07:19:50.470706] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization...
00:19:17.023 [2024-02-13 07:19:50.471641] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126539 ]
00:19:17.023 [2024-02-13 07:19:50.632519] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:17.281 [2024-02-13 07:19:50.871923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:17.539 [2024-02-13 07:19:51.069366] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:19:17.797 07:19:51 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:17.797 07:19:51 -- common/autotest_common.sh@850 -- # return 0
00:19:17.797 07:19:51 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:19:17.797 07:19:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:17.797 07:19:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:19:17.797 07:19:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:19:17.797 07:19:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:19:17.797 07:19:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:17.797 07:19:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:17.797 07:19:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:17.797 07:19:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:19:18.056 malloc1
00:19:18.056 07:19:51 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:18.314 [2024-02-13 07:19:51.909772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:18.314 [2024-02-13 07:19:51.909900] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:18.314 [2024-02-13 07:19:51.909939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80
00:19:18.314 [2024-02-13 07:19:51.909992] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:18.314 [2024-02-13 07:19:51.912391] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:18.314 [2024-02-13 07:19:51.912450] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:18.314 pt1
00:19:18.314 07:19:51 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:18.314 07:19:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:18.314 07:19:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:19:18.314 07:19:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:19:18.314 07:19:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:19:18.314 07:19:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:18.314 07:19:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:18.314 07:19:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:18.314 07:19:51 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:19:18.572 malloc2
00:19:18.573 07:19:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:18.831 [2024-02-13 07:19:52.452752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:18.831 [2024-02-13 07:19:52.452873] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:18.831 [2024-02-13 07:19:52.452924] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:19:18.831 [2024-02-13 07:19:52.452987] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:18.831 [2024-02-13 07:19:52.455293] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:18.831 [2024-02-13 07:19:52.455364] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:18.831 pt2
00:19:18.831 07:19:52 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:18.831 07:19:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:18.831 07:19:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:19:18.831 07:19:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:19:18.831 07:19:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:19:18.831 07:19:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:18.831 07:19:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:18.831 07:19:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:18.831 07:19:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:19:19.153 malloc3
00:19:19.153 07:19:52 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:19.411 [2024-02-13 07:19:52.960609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:19.411 [2024-02-13 07:19:52.960734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:19.411 [2024-02-13 07:19:52.960782] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:19:19.411 [2024-02-13 07:19:52.960828] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:19.411 [2024-02-13 07:19:52.963199] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:19.411 [2024-02-13 07:19:52.963274] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:19.411 pt3
00:19:19.411 07:19:52 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:19.411 07:19:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:19.411 07:19:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:19:19.411 07:19:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:19:19.411 07:19:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:19:19.411 07:19:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:19:19.411 07:19:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:19:19.411 07:19:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:19:19.411 07:19:52 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:19:19.670 malloc4
00:19:19.670 07:19:53 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:19:19.928 [2024-02-13 07:19:53.418941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:19:19.928 [2024-02-13 07:19:53.419069] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:19.928 [2024-02-13 07:19:53.419153] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:19:19.928 [2024-02-13 07:19:53.419225] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:19.928 [2024-02-13 07:19:53.421683] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:19.928 [2024-02-13 07:19:53.421739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:19:19.928 pt4
00:19:19.928 07:19:53 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:19:19.928 07:19:53 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:19:19.928 07:19:53 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:19:20.186 [2024-02-13 07:19:53.627019] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:20.187 [2024-02-13 07:19:53.628980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:20.187 [2024-02-13 07:19:53.629079] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:20.187 [2024-02-13 07:19:53.629178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:19:20.187 [2024-02-13 07:19:53.629436] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680
00:19:20.187 [2024-02-13 07:19:53.629453] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:20.187 [2024-02-13 07:19:53.629626] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00
00:19:20.187 [2024-02-13 07:19:53.630018] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680
00:19:20.187 [2024-02-13 07:19:53.630045] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680
00:19:20.187 [2024-02-13 07:19:53.630218] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
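[Editor's note: verify_raid_bdev_state captures the two xtrace lines around this point (the bdev_raid_get_bdevs call and its jq filter) into one pipeline and then compares fields of the result against the expected values. A sketch of the equivalent shell, using the exact RPC and jq filter from this log; the state comparison at the end is an illustrative stand-in for the helper's fuller checks (level, strip size, base-bdev counts):]

# Equivalent of bdev_raid.sh@127: fetch all raid bdevs, keep the one under test.
raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
    bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$raid_bdev_info" | jq -r '.state')
[[ $state == online ]] || echo "unexpected state: $state"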
00:19:20.187 07:19:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:20.445 07:19:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:20.445 "name": "raid_bdev1",
00:19:20.445 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16",
00:19:20.445 "strip_size_kb": 0,
00:19:20.445 "state": "online",
00:19:20.445 "raid_level": "raid1",
00:19:20.445 "superblock": true,
00:19:20.445 "num_base_bdevs": 4,
00:19:20.445 "num_base_bdevs_discovered": 4,
00:19:20.445 "num_base_bdevs_operational": 4,
00:19:20.445 "base_bdevs_list": [
00:19:20.445 {
00:19:20.445 "name": "pt1",
00:19:20.445 "uuid": "71fed804-e998-539a-bd46-fce0a20fefd4",
00:19:20.445 "is_configured": true,
00:19:20.445 "data_offset": 2048,
00:19:20.445 "data_size": 63488
00:19:20.445 },
00:19:20.445 {
00:19:20.445 "name": "pt2",
00:19:20.445 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b",
00:19:20.445 "is_configured": true,
00:19:20.445 "data_offset": 2048,
00:19:20.445 "data_size": 63488
00:19:20.445 },
00:19:20.445 {
00:19:20.445 "name": "pt3",
00:19:20.445 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7",
00:19:20.445 "is_configured": true,
00:19:20.445 "data_offset": 2048,
00:19:20.445 "data_size": 63488
00:19:20.445 },
00:19:20.445 {
00:19:20.445 "name": "pt4",
00:19:20.445 "uuid": "a444df3c-defa-5260-84eb-874d6a665233",
00:19:20.445 "is_configured": true,
00:19:20.445 "data_offset": 2048,
00:19:20.445 "data_size": 63488
00:19:20.445 }
00:19:20.445 ]
00:19:20.445 }'
00:19:20.445 07:19:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:20.445 07:19:53 -- common/autotest_common.sh@10 -- # set +x
00:19:21.012 07:19:54 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:21.012 07:19:54 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:19:21.270 [2024-02-13 07:19:54.779650] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:21.270 07:19:54 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1e596a99-dd00-4c83-a217-2b4aa094de16
00:19:21.270 07:19:54 -- bdev/bdev_raid.sh@380 -- # '[' -z 1e596a99-dd00-4c83-a217-2b4aa094de16 ']'
00:19:21.270 07:19:54 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:21.529 [2024-02-13 07:19:55.035383] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:21.529 [2024-02-13 07:19:55.035417] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:21.529 [2024-02-13 07:19:55.035543] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:21.529 [2024-02-13 07:19:55.035678] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:21.529 [2024-02-13 07:19:55.035691] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline
00:19:21.530 07:19:55 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:21.530 07:19:55 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:19:21.788 07:19:55 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:19:21.788 07:19:55 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:19:21.788 07:19:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:19:21.788 07:19:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
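[Editor's note: teardown mirrors setup in reverse. The script just deleted the raid and is now unwinding the passthru layer; a minimal sketch of that cleanup sequence, assuming the same socket path, with the pt2-pt4 loop iterations implied by the xtrace that follows:]

# Drop the array first, then the passthru bdevs stacked beneath it.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_delete raid_bdev1
for pt in pt1 pt2 pt3 pt4; do
  $RPC bdev_passthru_delete "$pt"
done
# bdev_raid_get_bdevs all should now be empty; note the malloc bdevs (and the
# superblock written on them) survive, which the next test step relies on.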
00:19:21.788 07:19:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:19:21.788 07:19:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:19:22.046 07:19:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:19:22.046 07:19:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:19:22.305 07:19:55 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:19:22.305 07:19:55 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:19:22.565 07:19:56 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:19:22.565 07:19:56 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:19:22.823 07:19:56 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:19:22.823 07:19:56 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:19:22.823 07:19:56 -- common/autotest_common.sh@638 -- # local es=0
00:19:22.823 07:19:56 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:19:22.823 07:19:56 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:22.824 07:19:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:19:22.824 07:19:56 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:22.824 07:19:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:19:22.824 07:19:56 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:22.824 07:19:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:19:22.824 07:19:56 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:19:22.824 07:19:56 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:19:22.824 07:19:56 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:19:23.082 [2024-02-13 07:19:56.575718] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:19:23.082 [2024-02-13 07:19:56.577547] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:19:23.082 [2024-02-13 07:19:56.577620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:19:23.082 [2024-02-13 07:19:56.577661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:19:23.082 [2024-02-13 07:19:56.577730] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:19:23.082 [2024-02-13 07:19:56.577830] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:19:23.082 [2024-02-13 07:19:56.577864] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:19:23.082 [2024-02-13 07:19:56.577936] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:19:23.082 [2024-02-13 07:19:56.577961] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:23.082 [2024-02-13 07:19:56.577971] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring
00:19:23.082 request:
00:19:23.082 {
00:19:23.082 "name": "raid_bdev1",
00:19:23.082 "raid_level": "raid1",
00:19:23.083 "base_bdevs": [
00:19:23.083 "malloc1",
00:19:23.083 "malloc2",
00:19:23.083 "malloc3",
00:19:23.083 "malloc4"
00:19:23.083 ],
00:19:23.083 "superblock": false,
00:19:23.083 "method": "bdev_raid_create",
00:19:23.083 "req_id": 1
00:19:23.083 }
00:19:23.083 Got JSON-RPC error response
00:19:23.083 response:
00:19:23.083 {
00:19:23.083 "code": -17,
00:19:23.083 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:19:23.083 }
00:19:23.083 07:19:56 -- common/autotest_common.sh@641 -- # es=1
00:19:23.083 07:19:56 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:19:23.083 07:19:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:19:23.083 07:19:56 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:19:23.083 07:19:56 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:23.083 07:19:56 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:19:23.341 07:19:56 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:19:23.341 07:19:56 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:19:23.341 07:19:56 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:19:23.601 [2024-02-13 07:19:57.051759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:19:23.601 [2024-02-13 07:19:57.051865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:23.601 [2024-02-13 07:19:57.051897] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:19:23.601 [2024-02-13 07:19:57.051925] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:23.601 [2024-02-13 07:19:57.054504] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:23.601 [2024-02-13 07:19:57.054588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:19:23.601 [2024-02-13 07:19:57.054693] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:19:23.601 [2024-02-13 07:19:57.054744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:19:23.601 pt1
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:23.601 "name": "raid_bdev1",
00:19:23.601 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16",
00:19:23.601 "strip_size_kb": 0,
00:19:23.601 "state": "configuring",
00:19:23.601 "raid_level": "raid1",
00:19:23.601 "superblock": true,
00:19:23.601 "num_base_bdevs": 4,
00:19:23.601 "num_base_bdevs_discovered": 1,
00:19:23.601 "num_base_bdevs_operational": 4,
00:19:23.601 "base_bdevs_list": [
00:19:23.601 {
00:19:23.601 "name": "pt1",
00:19:23.601 "uuid": "71fed804-e998-539a-bd46-fce0a20fefd4",
00:19:23.601 "is_configured": true,
00:19:23.601 "data_offset": 2048,
00:19:23.601 "data_size": 63488
00:19:23.601 },
00:19:23.601 {
00:19:23.601 "name": null,
00:19:23.601 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b",
00:19:23.601 "is_configured": false,
00:19:23.601 "data_offset": 2048,
00:19:23.601 "data_size": 63488
00:19:23.601 },
00:19:23.601 {
00:19:23.601 "name": null,
00:19:23.601 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7",
00:19:23.601 "is_configured": false,
00:19:23.601 "data_offset": 2048,
00:19:23.601 "data_size": 63488
00:19:23.601 },
00:19:23.601 {
00:19:23.601 "name": null,
00:19:23.601 "uuid": "a444df3c-defa-5260-84eb-874d6a665233",
00:19:23.601 "is_configured": false,
00:19:23.601 "data_offset": 2048,
00:19:23.601 "data_size": 63488
00:19:23.601 }
00:19:23.601 ]
00:19:23.601 }'
00:19:23.601 07:19:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:23.601 07:19:57 -- common/autotest_common.sh@10 -- # set +x
00:19:24.538 07:19:57 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:19:24.538 07:19:57 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:24.538 [2024-02-13 07:19:58.156065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:24.538 [2024-02-13 07:19:58.156164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:24.538 [2024-02-13 07:19:58.156208] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:19:24.538 [2024-02-13 07:19:58.156233] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:24.538 [2024-02-13 07:19:58.156724] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:24.538 [2024-02-13 07:19:58.156786] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:24.538 [2024-02-13 07:19:58.156917] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:19:24.538 [2024-02-13 07:19:58.156973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:24.538 pt2
00:19:24.538 07:19:58 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:19:24.797 [2024-02-13 07:19:58.388097] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:24.797 07:19:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:25.056 07:19:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:25.056 "name": "raid_bdev1",
00:19:25.056 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16",
00:19:25.056 "strip_size_kb": 0,
00:19:25.056 "state": "configuring",
00:19:25.056 "raid_level": "raid1",
00:19:25.056 "superblock": true,
00:19:25.056 "num_base_bdevs": 4,
00:19:25.056 "num_base_bdevs_discovered": 1,
00:19:25.056 "num_base_bdevs_operational": 4,
00:19:25.056 "base_bdevs_list": [
00:19:25.056 {
00:19:25.056 "name": "pt1",
00:19:25.056 "uuid": "71fed804-e998-539a-bd46-fce0a20fefd4",
00:19:25.056 "is_configured": true,
00:19:25.056 "data_offset": 2048,
00:19:25.056 "data_size": 63488
00:19:25.056 },
00:19:25.056 {
00:19:25.056 "name": null,
00:19:25.056 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b",
00:19:25.056 "is_configured": false,
00:19:25.056 "data_offset": 2048,
00:19:25.056 "data_size": 63488
00:19:25.056 },
00:19:25.056 {
00:19:25.056 "name": null,
00:19:25.056 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7",
00:19:25.056 "is_configured": false,
00:19:25.056 "data_offset": 2048,
00:19:25.056 "data_size": 63488
00:19:25.056 },
00:19:25.056 {
00:19:25.056 "name": null,
00:19:25.056 "uuid": "a444df3c-defa-5260-84eb-874d6a665233",
00:19:25.056 "is_configured": false,
00:19:25.056 "data_offset": 2048,
00:19:25.056 "data_size": 63488
00:19:25.056 }
00:19:25.056 ]
00:19:25.056 }'
00:19:25.056 07:19:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:25.056 07:19:58 -- common/autotest_common.sh@10 -- # set +x
00:19:25.992 07:19:59 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:19:25.992 07:19:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:19:25.992 07:19:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:25.992 [2024-02-13 07:19:59.576391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:25.992 [2024-02-13 07:19:59.576496] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:25.992 [2024-02-13 07:19:59.576589] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:19:25.992 [2024-02-13 07:19:59.576643] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:25.992 [2024-02-13 07:19:59.577236] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:25.992 [2024-02-13 07:19:59.577312] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:25.992 [2024-02-13 07:19:59.577477] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:19:25.992 [2024-02-13 07:19:59.577539] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:25.992 pt2
00:19:25.992 07:19:59 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:19:25.992 07:19:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:19:25.992 07:19:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:19:26.250 [2024-02-13 07:19:59.840452] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:19:26.250 [2024-02-13 07:19:59.840543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:26.250 [2024-02-13 07:19:59.840632] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:19:26.250 [2024-02-13 07:19:59.840692] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:26.250 [2024-02-13 07:19:59.841174] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:26.250 [2024-02-13 07:19:59.841291] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:19:26.250 [2024-02-13 07:19:59.841450] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:19:26.250 [2024-02-13 07:19:59.841503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:19:26.250 pt3
00:19:26.250 07:19:59 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:19:26.250 07:19:59 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:19:26.250 07:19:59 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:19:26.509 [2024-02-13 07:20:00.052489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:19:26.509 [2024-02-13 07:20:00.052580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:26.509 [2024-02-13 07:20:00.052641] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:19:26.509 [2024-02-13 07:20:00.052718] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:26.509 [2024-02-13 07:20:00.053262] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:26.509 [2024-02-13 07:20:00.053355] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:19:26.509 [2024-02-13 07:20:00.053504] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:19:26.509 [2024-02-13 07:20:00.053556] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:19:26.509 [2024-02-13 07:20:00.053745] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880
00:19:26.509 [2024-02-13 07:20:00.053770] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:19:26.509 [2024-02-13 07:20:00.053972] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:19:26.509 [2024-02-13 07:20:00.054427] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880
00:19:26.509 [2024-02-13 07:20:00.054468] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880
00:19:26.509 [2024-02-13 07:20:00.054669] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:19:26.509 pt4
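[Editor's note: because raid_bdev1 was created with -s, each base bdev carries an on-disk raid superblock, so simply re-registering the passthru bdevs is enough for examine to re-claim them and bring the array back, as the "raid superblock found on bdev ptN" lines above show; an explicit re-create against the same bases fails with "File exists". A sketch of this reassembly path, assuming the same names and fixed UUIDs as this run:]

# Re-register the remaining passthru bdevs; examine finds the superblock on
# each and re-claims it for raid_bdev1, with no bdev_raid_create call needed.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
$RPC bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
$RPC bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
# Once all bases are discovered the state flips from "configuring" to "online".
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'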
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:26.509 07:20:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:26.510 07:20:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:26.510 07:20:00 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:26.510 07:20:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:26.510 07:20:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:26.768 07:20:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:26.768 "name": "raid_bdev1",
00:19:26.768 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16",
00:19:26.768 "strip_size_kb": 0,
00:19:26.768 "state": "online",
00:19:26.768 "raid_level": "raid1",
00:19:26.768 "superblock": true,
00:19:26.768 "num_base_bdevs": 4,
00:19:26.768 "num_base_bdevs_discovered": 4,
00:19:26.768 "num_base_bdevs_operational": 4,
00:19:26.768 "base_bdevs_list": [
00:19:26.768 {
00:19:26.768 "name": "pt1",
00:19:26.768 "uuid": "71fed804-e998-539a-bd46-fce0a20fefd4",
00:19:26.768 "is_configured": true,
00:19:26.768 "data_offset": 2048,
00:19:26.768 "data_size": 63488
00:19:26.768 },
00:19:26.768 {
00:19:26.768 "name": "pt2",
00:19:26.768 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b",
00:19:26.768 "is_configured": true,
00:19:26.768 "data_offset": 2048,
00:19:26.768 "data_size": 63488
00:19:26.768 },
00:19:26.768 {
00:19:26.768 "name": "pt3",
00:19:26.768 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7",
00:19:26.768 "is_configured": true,
00:19:26.768 "data_offset": 2048,
00:19:26.768 "data_size": 63488
00:19:26.768 },
00:19:26.768 {
00:19:26.768 "name": "pt4",
00:19:26.768 "uuid": "a444df3c-defa-5260-84eb-874d6a665233",
00:19:26.768 "is_configured": true,
00:19:26.768 "data_offset": 2048,
00:19:26.768 "data_size": 63488
00:19:26.768 }
00:19:26.768 ]
00:19:26.768 }'
00:19:26.768 07:20:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:26.768 07:20:00 -- common/autotest_common.sh@10 -- # set +x
00:19:27.705 07:20:01 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:19:27.705 07:20:01 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:19:27.705 [2024-02-13 07:20:01.261096] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:19:27.705 07:20:01 -- bdev/bdev_raid.sh@430 -- # '[' 1e596a99-dd00-4c83-a217-2b4aa094de16 '!=' 1e596a99-dd00-4c83-a217-2b4aa094de16 ']'
00:19:27.705 07:20:01 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:19:27.705 07:20:01 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:19:27.705 07:20:01 -- bdev/bdev_raid.sh@196 -- # return 0
00:19:27.705 07:20:01 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:19:27.964 [2024-02-13 07:20:01.461013] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:27.964 07:20:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:19:28.223 07:20:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:19:28.223 "name": "raid_bdev1",
00:19:28.223 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16",
00:19:28.223 "strip_size_kb": 0,
00:19:28.223 "state": "online",
00:19:28.223 "raid_level": "raid1",
00:19:28.223 "superblock": true,
00:19:28.223 "num_base_bdevs": 4,
00:19:28.223 "num_base_bdevs_discovered": 3,
00:19:28.223 "num_base_bdevs_operational": 3,
00:19:28.223 "base_bdevs_list": [
00:19:28.223 {
00:19:28.223 "name": null,
00:19:28.223 "uuid": "00000000-0000-0000-0000-000000000000",
00:19:28.223 "is_configured": false,
00:19:28.223 "data_offset": 2048,
00:19:28.223 "data_size": 63488
00:19:28.223 },
00:19:28.223 {
00:19:28.223 "name": "pt2",
00:19:28.223 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b",
00:19:28.223 "is_configured": true,
00:19:28.223 "data_offset": 2048,
00:19:28.223 "data_size": 63488
00:19:28.223 },
00:19:28.223 {
00:19:28.223 "name": "pt3",
00:19:28.223 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7",
00:19:28.223 "is_configured": true,
00:19:28.223 "data_offset": 2048,
00:19:28.223 "data_size": 63488
00:19:28.223 },
00:19:28.223 {
00:19:28.223 "name": "pt4",
00:19:28.223 "uuid": "a444df3c-defa-5260-84eb-874d6a665233",
00:19:28.223 "is_configured": true,
00:19:28.223 "data_offset": 2048,
00:19:28.223 "data_size": 63488
00:19:28.223 }
00:19:28.223 ]
00:19:28.223 }'
00:19:28.223 07:20:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:19:28.223 07:20:01 -- common/autotest_common.sh@10 -- # set +x
00:19:28.790 07:20:02 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:19:29.060 [2024-02-13 07:20:02.664503] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:19:29.060 [2024-02-13 07:20:02.664540] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:19:29.060 [2024-02-13 07:20:02.664639] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:19:29.060 [2024-02-13 07:20:02.664722] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:19:29.060 [2024-02-13 07:20:02.664735] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline
00:19:29.060 07:20:02 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:29.060 07:20:02 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:19:29.333 07:20:02 -- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:19:29.333 07:20:02 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:19:29.333 07:20:02 -- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:19:29.333 07:20:02 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:19:29.333 07:20:02 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:19:29.592 07:20:03 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:19:29.592 07:20:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:19:29.592 07:20:03 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:19:29.850 07:20:03 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:19:29.850 07:20:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:19:29.850 07:20:03 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:19:30.109 07:20:03 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:19:30.109 07:20:03 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:19:30.109 07:20:03 -- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:19:30.109 07:20:03 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:19:30.109 07:20:03 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:19:30.109 [2024-02-13 07:20:03.772707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:19:30.109 [2024-02-13 07:20:03.772849] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:19:30.109 [2024-02-13 07:20:03.772904] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780
00:19:30.109 [2024-02-13 07:20:03.772934] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:19:30.110 [2024-02-13 07:20:03.775553] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:19:30.110 [2024-02-13 07:20:03.775665] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:19:30.110 [2024-02-13 07:20:03.775794] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:19:30.110 [2024-02-13 07:20:03.775871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:19:30.110 pt2
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@125 -- # local tmp
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:19:30.110 07:20:03 -- bdev/bdev_raid.sh@127
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.368 07:20:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:30.368 "name": "raid_bdev1", 00:19:30.368 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16", 00:19:30.368 "strip_size_kb": 0, 00:19:30.368 "state": "configuring", 00:19:30.368 "raid_level": "raid1", 00:19:30.368 "superblock": true, 00:19:30.368 "num_base_bdevs": 4, 00:19:30.368 "num_base_bdevs_discovered": 1, 00:19:30.368 "num_base_bdevs_operational": 3, 00:19:30.368 "base_bdevs_list": [ 00:19:30.368 { 00:19:30.368 "name": null, 00:19:30.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.368 "is_configured": false, 00:19:30.368 "data_offset": 2048, 00:19:30.368 "data_size": 63488 00:19:30.368 }, 00:19:30.368 { 00:19:30.368 "name": "pt2", 00:19:30.368 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b", 00:19:30.368 "is_configured": true, 00:19:30.368 "data_offset": 2048, 00:19:30.368 "data_size": 63488 00:19:30.368 }, 00:19:30.368 { 00:19:30.368 "name": null, 00:19:30.368 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7", 00:19:30.368 "is_configured": false, 00:19:30.368 "data_offset": 2048, 00:19:30.368 "data_size": 63488 00:19:30.368 }, 00:19:30.368 { 00:19:30.368 "name": null, 00:19:30.368 "uuid": "a444df3c-defa-5260-84eb-874d6a665233", 00:19:30.368 "is_configured": false, 00:19:30.368 "data_offset": 2048, 00:19:30.368 "data_size": 63488 00:19:30.368 } 00:19:30.368 ] 00:19:30.368 }' 00:19:30.368 07:20:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:30.368 07:20:04 -- common/autotest_common.sh@10 -- # set +x 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:31.303 [2024-02-13 07:20:04.889005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:31.303 [2024-02-13 07:20:04.889128] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:31.303 [2024-02-13 07:20:04.889181] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:19:31.303 [2024-02-13 07:20:04.889230] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:31.303 [2024-02-13 07:20:04.889766] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:31.303 [2024-02-13 07:20:04.889816] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:31.303 [2024-02-13 07:20:04.889943] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:31.303 [2024-02-13 07:20:04.889980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:31.303 pt3 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:31.303 07:20:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.561 07:20:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:31.561 "name": "raid_bdev1", 00:19:31.561 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16", 00:19:31.561 "strip_size_kb": 0, 00:19:31.561 "state": "configuring", 00:19:31.561 "raid_level": "raid1", 00:19:31.561 "superblock": true, 00:19:31.561 "num_base_bdevs": 4, 00:19:31.561 "num_base_bdevs_discovered": 2, 00:19:31.561 "num_base_bdevs_operational": 3, 00:19:31.561 "base_bdevs_list": [ 00:19:31.561 { 00:19:31.561 "name": null, 00:19:31.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.561 "is_configured": false, 00:19:31.561 "data_offset": 2048, 00:19:31.561 "data_size": 63488 00:19:31.561 }, 00:19:31.561 { 00:19:31.561 "name": "pt2", 00:19:31.561 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b", 00:19:31.561 "is_configured": true, 00:19:31.561 "data_offset": 2048, 00:19:31.561 "data_size": 63488 00:19:31.561 }, 00:19:31.561 { 00:19:31.561 "name": "pt3", 00:19:31.561 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7", 00:19:31.561 "is_configured": true, 00:19:31.561 "data_offset": 2048, 00:19:31.561 "data_size": 63488 00:19:31.561 }, 00:19:31.562 { 00:19:31.562 "name": null, 00:19:31.562 "uuid": "a444df3c-defa-5260-84eb-874d6a665233", 00:19:31.562 "is_configured": false, 00:19:31.562 "data_offset": 2048, 00:19:31.562 "data_size": 63488 00:19:31.562 } 00:19:31.562 ] 00:19:31.562 }' 00:19:31.562 07:20:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:31.562 07:20:05 -- common/autotest_common.sh@10 -- # set +x 00:19:32.496 07:20:05 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:19:32.496 07:20:05 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:19:32.496 07:20:05 -- bdev/bdev_raid.sh@462 -- # i=3 00:19:32.496 07:20:05 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:32.496 [2024-02-13 07:20:06.085523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:32.496 [2024-02-13 07:20:06.085658] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:32.496 [2024-02-13 07:20:06.085710] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:32.496 [2024-02-13 07:20:06.085736] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:32.496 [2024-02-13 07:20:06.086442] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:32.496 [2024-02-13 07:20:06.086506] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:32.496 [2024-02-13 07:20:06.086650] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:32.496 [2024-02-13 07:20:06.086685] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:32.496 [2024-02-13 07:20:06.086888] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:19:32.496 [2024-02-13 07:20:06.086928] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
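For reference, the verify_raid_bdev_state checks running throughout this log reduce to one RPC dump plus jq field assertions. The sketch below is a simplified reconstruction, not the actual helper from test/bdev/bdev_raid.sh in the SPDK tree; the RPC socket path and JSON field names are taken from the calls and dumps visible above, and everything else is illustrative.

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

verify_raid_bdev_state() {
    local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 num_operational=$5
    local info
    # Dump every raid bdev and keep only the one under test.
    info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
    [[ -n $info ]] || return 1
    # Assert on the same fields the JSON dumps above report.
    [[ $(jq -r .state <<< "$info") == "$expected_state" ]] || return 1
    [[ $(jq -r .raid_level <<< "$info") == "$raid_level" ]] || return 1
    [[ $(jq -r .strip_size_kb <<< "$info") -eq $strip_size ]] || return 1
    [[ $(jq -r .num_base_bdevs_operational <<< "$info") -eq $num_operational ]] || return 1
}

# The check that follows once pt4 is claimed: raid1, strip size 0, 3 of 4 members
verify_raid_bdev_state raid_bdev1 online raid1 0 3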
00:19:32.496 [2024-02-13 07:20:06.087089] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:32.496 [2024-02-13 07:20:06.087536] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:19:32.496 [2024-02-13 07:20:06.087563] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:19:32.496 [2024-02-13 07:20:06.087773] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.496 pt4 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:32.496 07:20:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.753 07:20:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:32.753 "name": "raid_bdev1", 00:19:32.753 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16", 00:19:32.753 "strip_size_kb": 0, 00:19:32.753 "state": "online", 00:19:32.753 "raid_level": "raid1", 00:19:32.753 "superblock": true, 00:19:32.753 "num_base_bdevs": 4, 00:19:32.753 "num_base_bdevs_discovered": 3, 00:19:32.753 "num_base_bdevs_operational": 3, 00:19:32.753 "base_bdevs_list": [ 00:19:32.753 { 00:19:32.753 "name": null, 00:19:32.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.753 "is_configured": false, 00:19:32.753 "data_offset": 2048, 00:19:32.753 "data_size": 63488 00:19:32.753 }, 00:19:32.753 { 00:19:32.753 "name": "pt2", 00:19:32.753 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b", 00:19:32.753 "is_configured": true, 00:19:32.753 "data_offset": 2048, 00:19:32.753 "data_size": 63488 00:19:32.753 }, 00:19:32.753 { 00:19:32.753 "name": "pt3", 00:19:32.753 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7", 00:19:32.753 "is_configured": true, 00:19:32.753 "data_offset": 2048, 00:19:32.753 "data_size": 63488 00:19:32.753 }, 00:19:32.753 { 00:19:32.753 "name": "pt4", 00:19:32.753 "uuid": "a444df3c-defa-5260-84eb-874d6a665233", 00:19:32.753 "is_configured": true, 00:19:32.753 "data_offset": 2048, 00:19:32.753 "data_size": 63488 00:19:32.753 } 00:19:32.753 ] 00:19:32.753 }' 00:19:32.753 07:20:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:32.753 07:20:06 -- common/autotest_common.sh@10 -- # set +x 00:19:33.686 07:20:07 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:19:33.686 07:20:07 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:33.686 [2024-02-13 07:20:07.301695] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:33.686 [2024-02-13 07:20:07.301727] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:19:33.686 [2024-02-13 07:20:07.301828] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:33.686 [2024-02-13 07:20:07.301914] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:33.686 [2024-02-13 07:20:07.301925] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:19:33.686 07:20:07 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:33.686 07:20:07 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:19:33.944 07:20:07 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:19:33.944 07:20:07 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:19:33.944 07:20:07 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:34.202 [2024-02-13 07:20:07.769813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:34.202 [2024-02-13 07:20:07.769930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:34.202 [2024-02-13 07:20:07.769977] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:19:34.202 [2024-02-13 07:20:07.770001] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:34.202 [2024-02-13 07:20:07.772326] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:34.202 [2024-02-13 07:20:07.772402] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:34.202 [2024-02-13 07:20:07.772509] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:19:34.202 [2024-02-13 07:20:07.772566] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:34.202 pt1 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:34.202 07:20:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:34.203 07:20:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:34.203 07:20:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.460 07:20:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:34.460 "name": "raid_bdev1", 00:19:34.460 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16", 00:19:34.460 "strip_size_kb": 0, 00:19:34.460 "state": "configuring", 00:19:34.460 "raid_level": "raid1", 00:19:34.460 "superblock": true, 00:19:34.460 "num_base_bdevs": 4, 00:19:34.460 "num_base_bdevs_discovered": 1, 00:19:34.460 "num_base_bdevs_operational": 4, 00:19:34.460 "base_bdevs_list": [ 00:19:34.460 { 00:19:34.460 "name": "pt1", 00:19:34.460 "uuid": 
"71fed804-e998-539a-bd46-fce0a20fefd4", 00:19:34.460 "is_configured": true, 00:19:34.460 "data_offset": 2048, 00:19:34.460 "data_size": 63488 00:19:34.460 }, 00:19:34.460 { 00:19:34.460 "name": null, 00:19:34.461 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b", 00:19:34.461 "is_configured": false, 00:19:34.461 "data_offset": 2048, 00:19:34.461 "data_size": 63488 00:19:34.461 }, 00:19:34.461 { 00:19:34.461 "name": null, 00:19:34.461 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7", 00:19:34.461 "is_configured": false, 00:19:34.461 "data_offset": 2048, 00:19:34.461 "data_size": 63488 00:19:34.461 }, 00:19:34.461 { 00:19:34.461 "name": null, 00:19:34.461 "uuid": "a444df3c-defa-5260-84eb-874d6a665233", 00:19:34.461 "is_configured": false, 00:19:34.461 "data_offset": 2048, 00:19:34.461 "data_size": 63488 00:19:34.461 } 00:19:34.461 ] 00:19:34.461 }' 00:19:34.461 07:20:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:34.461 07:20:08 -- common/autotest_common.sh@10 -- # set +x 00:19:35.027 07:20:08 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:19:35.027 07:20:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:35.027 07:20:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:19:35.285 07:20:08 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:35.285 07:20:08 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:35.285 07:20:08 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:19:35.543 07:20:09 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:35.543 07:20:09 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:35.543 07:20:09 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:19:35.802 07:20:09 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:19:35.802 07:20:09 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:19:35.802 07:20:09 -- bdev/bdev_raid.sh@489 -- # i=3 00:19:35.802 07:20:09 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:19:36.059 [2024-02-13 07:20:09.614210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:19:36.059 [2024-02-13 07:20:09.614333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:36.059 [2024-02-13 07:20:09.614370] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:19:36.059 [2024-02-13 07:20:09.614399] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:36.059 [2024-02-13 07:20:09.614962] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:36.059 [2024-02-13 07:20:09.615011] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:19:36.059 [2024-02-13 07:20:09.615135] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:19:36.059 [2024-02-13 07:20:09.615152] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:36.059 [2024-02-13 07:20:09.615160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:36.060 [2024-02-13 07:20:09.615204] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 
00:19:36.060 [2024-02-13 07:20:09.615313] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:19:36.060 pt4 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:36.060 07:20:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.318 07:20:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:36.318 "name": "raid_bdev1", 00:19:36.318 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16", 00:19:36.318 "strip_size_kb": 0, 00:19:36.318 "state": "configuring", 00:19:36.318 "raid_level": "raid1", 00:19:36.318 "superblock": true, 00:19:36.318 "num_base_bdevs": 4, 00:19:36.318 "num_base_bdevs_discovered": 1, 00:19:36.318 "num_base_bdevs_operational": 3, 00:19:36.318 "base_bdevs_list": [ 00:19:36.318 { 00:19:36.318 "name": null, 00:19:36.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.318 "is_configured": false, 00:19:36.318 "data_offset": 2048, 00:19:36.318 "data_size": 63488 00:19:36.318 }, 00:19:36.318 { 00:19:36.318 "name": null, 00:19:36.318 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b", 00:19:36.318 "is_configured": false, 00:19:36.318 "data_offset": 2048, 00:19:36.318 "data_size": 63488 00:19:36.318 }, 00:19:36.318 { 00:19:36.318 "name": null, 00:19:36.318 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7", 00:19:36.318 "is_configured": false, 00:19:36.318 "data_offset": 2048, 00:19:36.318 "data_size": 63488 00:19:36.318 }, 00:19:36.318 { 00:19:36.318 "name": "pt4", 00:19:36.318 "uuid": "a444df3c-defa-5260-84eb-874d6a665233", 00:19:36.318 "is_configured": true, 00:19:36.318 "data_offset": 2048, 00:19:36.318 "data_size": 63488 00:19:36.318 } 00:19:36.318 ] 00:19:36.318 }' 00:19:36.318 07:20:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:36.318 07:20:09 -- common/autotest_common.sh@10 -- # set +x 00:19:37.253 07:20:10 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:19:37.253 07:20:10 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:37.253 07:20:10 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:37.253 [2024-02-13 07:20:10.838511] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:37.253 [2024-02-13 07:20:10.838633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.253 [2024-02-13 07:20:10.838685] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:19:37.253 [2024-02-13 07:20:10.838714] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.253 [2024-02-13 
07:20:10.839275] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.253 [2024-02-13 07:20:10.839377] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:37.253 [2024-02-13 07:20:10.839481] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:19:37.253 [2024-02-13 07:20:10.839510] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:37.253 pt2 00:19:37.254 07:20:10 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:37.254 07:20:10 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:37.254 07:20:10 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:19:37.513 [2024-02-13 07:20:11.102584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:19:37.513 [2024-02-13 07:20:11.102685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:37.513 [2024-02-13 07:20:11.102727] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:19:37.513 [2024-02-13 07:20:11.102754] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:37.513 [2024-02-13 07:20:11.103256] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:37.513 [2024-02-13 07:20:11.103345] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:19:37.513 [2024-02-13 07:20:11.103449] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:19:37.513 [2024-02-13 07:20:11.103479] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:19:37.513 [2024-02-13 07:20:11.103671] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:19:37.513 [2024-02-13 07:20:11.103687] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:19:37.513 [2024-02-13 07:20:11.103798] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:19:37.513 [2024-02-13 07:20:11.104162] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:19:37.513 [2024-02-13 07:20:11.104188] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:19:37.513 [2024-02-13 07:20:11.104340] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:37.513 pt3 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:37.513 07:20:11 
-- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:37.513 07:20:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:37.773 07:20:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:37.773 "name": "raid_bdev1", 00:19:37.773 "uuid": "1e596a99-dd00-4c83-a217-2b4aa094de16", 00:19:37.773 "strip_size_kb": 0, 00:19:37.773 "state": "online", 00:19:37.773 "raid_level": "raid1", 00:19:37.773 "superblock": true, 00:19:37.773 "num_base_bdevs": 4, 00:19:37.773 "num_base_bdevs_discovered": 3, 00:19:37.773 "num_base_bdevs_operational": 3, 00:19:37.773 "base_bdevs_list": [ 00:19:37.773 { 00:19:37.773 "name": null, 00:19:37.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.773 "is_configured": false, 00:19:37.773 "data_offset": 2048, 00:19:37.773 "data_size": 63488 00:19:37.773 }, 00:19:37.773 { 00:19:37.773 "name": "pt2", 00:19:37.773 "uuid": "9d1376a0-3c0f-5e63-b798-623121ede69b", 00:19:37.773 "is_configured": true, 00:19:37.773 "data_offset": 2048, 00:19:37.773 "data_size": 63488 00:19:37.773 }, 00:19:37.773 { 00:19:37.773 "name": "pt3", 00:19:37.773 "uuid": "1a629e1e-7bf9-5b9c-8dbb-4edf7cf5a5f7", 00:19:37.773 "is_configured": true, 00:19:37.773 "data_offset": 2048, 00:19:37.773 "data_size": 63488 00:19:37.773 }, 00:19:37.773 { 00:19:37.773 "name": "pt4", 00:19:37.773 "uuid": "a444df3c-defa-5260-84eb-874d6a665233", 00:19:37.773 "is_configured": true, 00:19:37.773 "data_offset": 2048, 00:19:37.773 "data_size": 63488 00:19:37.773 } 00:19:37.773 ] 00:19:37.773 }' 00:19:37.773 07:20:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:37.773 07:20:11 -- common/autotest_common.sh@10 -- # set +x 00:19:38.340 07:20:11 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:38.340 07:20:11 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:19:38.599 [2024-02-13 07:20:12.203190] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:38.599 07:20:12 -- bdev/bdev_raid.sh@506 -- # '[' 1e596a99-dd00-4c83-a217-2b4aa094de16 '!=' 1e596a99-dd00-4c83-a217-2b4aa094de16 ']' 00:19:38.599 07:20:12 -- bdev/bdev_raid.sh@511 -- # killprocess 126539 00:19:38.599 07:20:12 -- common/autotest_common.sh@924 -- # '[' -z 126539 ']' 00:19:38.599 07:20:12 -- common/autotest_common.sh@928 -- # kill -0 126539 00:19:38.599 07:20:12 -- common/autotest_common.sh@929 -- # uname 00:19:38.599 07:20:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:19:38.599 07:20:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 126539 00:19:38.599 killing process with pid 126539 00:19:38.599 07:20:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:19:38.599 07:20:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:19:38.599 07:20:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 126539' 00:19:38.599 07:20:12 -- common/autotest_common.sh@943 -- # kill 126539 00:19:38.599 07:20:12 -- common/autotest_common.sh@948 -- # wait 126539 00:19:38.599 [2024-02-13 07:20:12.237585] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.599 [2024-02-13 07:20:12.237740] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.599 [2024-02-13 07:20:12.237863] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.599 [2024-02-13 
07:20:12.237888] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:19:39.166 [2024-02-13 07:20:12.571624] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.126 ************************************ 00:19:40.126 END TEST raid_superblock_test 00:19:40.126 ************************************ 00:19:40.126 07:20:13 -- bdev/bdev_raid.sh@513 -- # return 0 00:19:40.126 00:19:40.126 real 0m23.360s 00:19:40.126 user 0m43.120s 00:19:40.126 sys 0m2.591s 00:19:40.126 07:20:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:40.126 07:20:13 -- common/autotest_common.sh@10 -- # set +x 00:19:40.126 07:20:13 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:19:40.126 07:20:13 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:19:40.126 07:20:13 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:19:40.126 07:20:13 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:19:40.126 07:20:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:19:40.126 07:20:13 -- common/autotest_common.sh@10 -- # set +x 00:19:40.385 ************************************ 00:19:40.385 START TEST raid_rebuild_test 00:19:40.385 ************************************ 00:19:40.385 07:20:13 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid1 2 false false 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@544 -- # raid_pid=127264 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127264 /var/tmp/spdk-raid.sock 00:19:40.385 07:20:13 -- common/autotest_common.sh@817 -- # '[' -z 127264 ']' 00:19:40.385 07:20:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:19:40.385 07:20:13 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:19:40.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:19:40.385 07:20:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:40.385 07:20:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:19:40.385 07:20:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:40.385 07:20:13 -- common/autotest_common.sh@10 -- # set +x 00:19:40.385 [2024-02-13 07:20:13.907554] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:19:40.385 [2024-02-13 07:20:13.907789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127264 ] 00:19:40.385 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:40.385 Zero copy mechanism will not be used. 00:19:40.645 [2024-02-13 07:20:14.078617] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.645 [2024-02-13 07:20:14.255929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.904 [2024-02-13 07:20:14.474878] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:41.162 07:20:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:41.162 07:20:14 -- common/autotest_common.sh@850 -- # return 0 00:19:41.162 07:20:14 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:41.162 07:20:14 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:41.162 07:20:14 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:19:41.421 BaseBdev1 00:19:41.421 07:20:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:19:41.421 07:20:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:19:41.421 07:20:15 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:19:41.680 BaseBdev2 00:19:41.681 07:20:15 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:19:41.940 spare_malloc 00:19:41.940 07:20:15 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:42.199 spare_delay 00:19:42.199 07:20:15 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:19:42.458 [2024-02-13 07:20:16.046129] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:42.458 [2024-02-13 07:20:16.046264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.458 [2024-02-13 07:20:16.046315] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:42.458 [2024-02-13 07:20:16.046394] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.458 [2024-02-13 07:20:16.049506] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.458 [2024-02-13 07:20:16.049601] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:42.458 spare 00:19:42.458 07:20:16 -- 
bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:19:42.717 [2024-02-13 07:20:16.266333] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:42.717 [2024-02-13 07:20:16.268201] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:42.717 [2024-02-13 07:20:16.268315] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:19:42.717 [2024-02-13 07:20:16.268328] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:19:42.717 [2024-02-13 07:20:16.268508] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:19:42.717 [2024-02-13 07:20:16.268839] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:19:42.717 [2024-02-13 07:20:16.268862] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:19:42.717 [2024-02-13 07:20:16.269029] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:42.717 07:20:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.976 07:20:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:42.976 "name": "raid_bdev1", 00:19:42.976 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:42.976 "strip_size_kb": 0, 00:19:42.976 "state": "online", 00:19:42.976 "raid_level": "raid1", 00:19:42.976 "superblock": false, 00:19:42.976 "num_base_bdevs": 2, 00:19:42.976 "num_base_bdevs_discovered": 2, 00:19:42.976 "num_base_bdevs_operational": 2, 00:19:42.976 "base_bdevs_list": [ 00:19:42.976 { 00:19:42.976 "name": "BaseBdev1", 00:19:42.976 "uuid": "989f3acc-784d-4ddd-a9ba-739ef8ec9792", 00:19:42.976 "is_configured": true, 00:19:42.976 "data_offset": 0, 00:19:42.976 "data_size": 65536 00:19:42.976 }, 00:19:42.976 { 00:19:42.976 "name": "BaseBdev2", 00:19:42.976 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:42.976 "is_configured": true, 00:19:42.976 "data_offset": 0, 00:19:42.976 "data_size": 65536 00:19:42.976 } 00:19:42.976 ] 00:19:42.976 }' 00:19:42.976 07:20:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:42.976 07:20:16 -- common/autotest_common.sh@10 -- # set +x 00:19:43.543 07:20:17 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:19:43.543 07:20:17 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:19:43.802 [2024-02-13 
07:20:17.474995] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.802 07:20:17 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:19:43.802 07:20:17 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:43.802 07:20:17 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:44.061 07:20:17 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:19:44.061 07:20:17 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:19:44.061 07:20:17 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:19:44.061 07:20:17 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:19:44.061 07:20:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:44.061 07:20:17 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:44.061 07:20:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:44.061 07:20:17 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:44.061 07:20:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:44.061 07:20:17 -- bdev/nbd_common.sh@12 -- # local i 00:19:44.061 07:20:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:44.061 07:20:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.061 07:20:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:44.320 [2024-02-13 07:20:17.910948] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:19:44.320 /dev/nbd0 00:19:44.321 07:20:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:44.321 07:20:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:44.321 07:20:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:19:44.321 07:20:17 -- common/autotest_common.sh@855 -- # local i 00:19:44.321 07:20:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:19:44.321 07:20:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:19:44.321 07:20:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:19:44.321 07:20:17 -- common/autotest_common.sh@859 -- # break 00:19:44.321 07:20:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:19:44.321 07:20:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:19:44.321 07:20:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:44.321 1+0 records in 00:19:44.321 1+0 records out 00:19:44.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420284 s, 9.7 MB/s 00:19:44.321 07:20:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.321 07:20:17 -- common/autotest_common.sh@872 -- # size=4096 00:19:44.321 07:20:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:44.321 07:20:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:19:44.321 07:20:17 -- common/autotest_common.sh@875 -- # return 0 00:19:44.321 07:20:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:44.321 07:20:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:44.321 07:20:17 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:19:44.321 07:20:17 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:19:44.321 07:20:17 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:19:49.590 65536+0 records in 00:19:49.590 65536+0 records out 00:19:49.590 33554432 bytes (34 MB, 
32 MiB) copied, 4.5872 s, 7.3 MB/s 00:19:49.590 07:20:22 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@51 -- # local i 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.590 [2024-02-13 07:20:22.826493] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@41 -- # break 00:19:49.590 07:20:22 -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.590 07:20:22 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:19:49.590 [2024-02-13 07:20:23.042217] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:49.590 07:20:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.848 07:20:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:49.848 "name": "raid_bdev1", 00:19:49.848 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:49.848 "strip_size_kb": 0, 00:19:49.848 "state": "online", 00:19:49.848 "raid_level": "raid1", 00:19:49.848 "superblock": false, 00:19:49.848 "num_base_bdevs": 2, 00:19:49.848 "num_base_bdevs_discovered": 1, 00:19:49.848 "num_base_bdevs_operational": 1, 00:19:49.848 "base_bdevs_list": [ 00:19:49.848 { 00:19:49.848 "name": null, 00:19:49.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.848 "is_configured": false, 00:19:49.848 "data_offset": 0, 00:19:49.848 "data_size": 65536 00:19:49.848 }, 00:19:49.848 { 00:19:49.848 "name": "BaseBdev2", 00:19:49.848 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:49.848 "is_configured": true, 00:19:49.848 "data_offset": 0, 00:19:49.848 "data_size": 65536 00:19:49.848 } 00:19:49.848 ] 00:19:49.848 }' 
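That dd transfer is the data-fill step of the rebuild test: the raid1 bdev is exposed as a block device over NBD, written end to end (65536 blocks x 512 bytes = 32 MiB), and the NBD device is torn down before a base bdev is pulled. A sketch of the sequence, condensed from the nbd_common.sh helper calls above; it assumes the nbd kernel module is loaded, as on this test VM.

rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# Expose raid_bdev1 as /dev/nbd0 and fill it with random data.
$rpc_py nbd_start_disk raid_bdev1 /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
$rpc_py nbd_stop_disk /dev/nbd0

# Pull one mirror leg: raid1 stays online with a single operational member,
# which is exactly what the verify_raid_bdev_state call that follows asserts.
$rpc_py bdev_raid_remove_base_bdev BaseBdev1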
00:19:49.848 07:20:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:49.848 07:20:23 -- common/autotest_common.sh@10 -- # set +x 00:19:50.415 07:20:23 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.674 [2024-02-13 07:20:24.206638] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:50.674 [2024-02-13 07:20:24.206717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.674 [2024-02-13 07:20:24.220356] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b500 00:19:50.674 [2024-02-13 07:20:24.222635] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:50.674 07:20:24 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:19:51.607 07:20:25 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.607 07:20:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:51.607 07:20:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:51.608 07:20:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:51.608 07:20:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:51.608 07:20:25 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:51.608 07:20:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.866 07:20:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:51.866 "name": "raid_bdev1", 00:19:51.866 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:51.866 "strip_size_kb": 0, 00:19:51.866 "state": "online", 00:19:51.866 "raid_level": "raid1", 00:19:51.866 "superblock": false, 00:19:51.866 "num_base_bdevs": 2, 00:19:51.866 "num_base_bdevs_discovered": 2, 00:19:51.866 "num_base_bdevs_operational": 2, 00:19:51.866 "process": { 00:19:51.866 "type": "rebuild", 00:19:51.866 "target": "spare", 00:19:51.866 "progress": { 00:19:51.866 "blocks": 22528, 00:19:51.866 "percent": 34 00:19:51.866 } 00:19:51.866 }, 00:19:51.866 "base_bdevs_list": [ 00:19:51.866 { 00:19:51.866 "name": "spare", 00:19:51.866 "uuid": "d9546404-0d60-5e27-b7d8-1f33010b69ff", 00:19:51.866 "is_configured": true, 00:19:51.866 "data_offset": 0, 00:19:51.866 "data_size": 65536 00:19:51.866 }, 00:19:51.866 { 00:19:51.866 "name": "BaseBdev2", 00:19:51.866 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:51.866 "is_configured": true, 00:19:51.866 "data_offset": 0, 00:19:51.867 "data_size": 65536 00:19:51.867 } 00:19:51.867 ] 00:19:51.867 }' 00:19:51.867 07:20:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:51.867 07:20:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.867 07:20:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:51.867 07:20:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.867 07:20:25 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:19:52.125 [2024-02-13 07:20:25.808048] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:52.383 [2024-02-13 07:20:25.831838] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:52.383 [2024-02-13 07:20:25.831952] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:52.383 07:20:25 -- 
bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:52.383 07:20:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.642 07:20:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:52.642 "name": "raid_bdev1", 00:19:52.642 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:52.642 "strip_size_kb": 0, 00:19:52.642 "state": "online", 00:19:52.642 "raid_level": "raid1", 00:19:52.642 "superblock": false, 00:19:52.642 "num_base_bdevs": 2, 00:19:52.642 "num_base_bdevs_discovered": 1, 00:19:52.642 "num_base_bdevs_operational": 1, 00:19:52.642 "base_bdevs_list": [ 00:19:52.642 { 00:19:52.642 "name": null, 00:19:52.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.642 "is_configured": false, 00:19:52.642 "data_offset": 0, 00:19:52.642 "data_size": 65536 00:19:52.642 }, 00:19:52.642 { 00:19:52.642 "name": "BaseBdev2", 00:19:52.642 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:52.642 "is_configured": true, 00:19:52.642 "data_offset": 0, 00:19:52.642 "data_size": 65536 00:19:52.642 } 00:19:52.642 ] 00:19:52.642 }' 00:19:52.642 07:20:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:52.642 07:20:26 -- common/autotest_common.sh@10 -- # set +x 00:19:53.209 07:20:26 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:53.209 07:20:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:53.209 07:20:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:53.209 07:20:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:53.209 07:20:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:53.209 07:20:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:53.209 07:20:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.469 07:20:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:53.469 "name": "raid_bdev1", 00:19:53.469 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:53.469 "strip_size_kb": 0, 00:19:53.469 "state": "online", 00:19:53.469 "raid_level": "raid1", 00:19:53.469 "superblock": false, 00:19:53.469 "num_base_bdevs": 2, 00:19:53.469 "num_base_bdevs_discovered": 1, 00:19:53.469 "num_base_bdevs_operational": 1, 00:19:53.469 "base_bdevs_list": [ 00:19:53.469 { 00:19:53.469 "name": null, 00:19:53.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:53.469 "is_configured": false, 00:19:53.469 "data_offset": 0, 00:19:53.469 "data_size": 65536 00:19:53.469 }, 00:19:53.469 { 00:19:53.469 "name": "BaseBdev2", 00:19:53.469 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:53.469 "is_configured": true, 
00:19:53.469 "data_offset": 0, 00:19:53.469 "data_size": 65536 00:19:53.469 } 00:19:53.469 ] 00:19:53.469 }' 00:19:53.469 07:20:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:53.469 07:20:27 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:53.469 07:20:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:53.469 07:20:27 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:53.469 07:20:27 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:19:53.728 [2024-02-13 07:20:27.404608] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:19:53.728 [2024-02-13 07:20:27.404654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:53.728 [2024-02-13 07:20:27.416520] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:19:53.728 [2024-02-13 07:20:27.418511] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:53.989 07:20:27 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:19:54.924 07:20:28 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.924 07:20:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:54.924 07:20:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:54.924 07:20:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:54.924 07:20:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:54.924 07:20:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:54.924 07:20:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.183 "name": "raid_bdev1", 00:19:55.183 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:55.183 "strip_size_kb": 0, 00:19:55.183 "state": "online", 00:19:55.183 "raid_level": "raid1", 00:19:55.183 "superblock": false, 00:19:55.183 "num_base_bdevs": 2, 00:19:55.183 "num_base_bdevs_discovered": 2, 00:19:55.183 "num_base_bdevs_operational": 2, 00:19:55.183 "process": { 00:19:55.183 "type": "rebuild", 00:19:55.183 "target": "spare", 00:19:55.183 "progress": { 00:19:55.183 "blocks": 24576, 00:19:55.183 "percent": 37 00:19:55.183 } 00:19:55.183 }, 00:19:55.183 "base_bdevs_list": [ 00:19:55.183 { 00:19:55.183 "name": "spare", 00:19:55.183 "uuid": "d9546404-0d60-5e27-b7d8-1f33010b69ff", 00:19:55.183 "is_configured": true, 00:19:55.183 "data_offset": 0, 00:19:55.183 "data_size": 65536 00:19:55.183 }, 00:19:55.183 { 00:19:55.183 "name": "BaseBdev2", 00:19:55.183 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:55.183 "is_configured": true, 00:19:55.183 "data_offset": 0, 00:19:55.183 "data_size": 65536 00:19:55.183 } 00:19:55.183 ] 00:19:55.183 }' 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:19:55.183 07:20:28 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@657 -- # local timeout=409 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:55.183 07:20:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.442 07:20:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:55.442 "name": "raid_bdev1", 00:19:55.442 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:55.442 "strip_size_kb": 0, 00:19:55.442 "state": "online", 00:19:55.442 "raid_level": "raid1", 00:19:55.442 "superblock": false, 00:19:55.442 "num_base_bdevs": 2, 00:19:55.442 "num_base_bdevs_discovered": 2, 00:19:55.442 "num_base_bdevs_operational": 2, 00:19:55.442 "process": { 00:19:55.442 "type": "rebuild", 00:19:55.442 "target": "spare", 00:19:55.442 "progress": { 00:19:55.442 "blocks": 30720, 00:19:55.442 "percent": 46 00:19:55.442 } 00:19:55.442 }, 00:19:55.442 "base_bdevs_list": [ 00:19:55.442 { 00:19:55.442 "name": "spare", 00:19:55.442 "uuid": "d9546404-0d60-5e27-b7d8-1f33010b69ff", 00:19:55.442 "is_configured": true, 00:19:55.442 "data_offset": 0, 00:19:55.442 "data_size": 65536 00:19:55.442 }, 00:19:55.442 { 00:19:55.442 "name": "BaseBdev2", 00:19:55.442 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:55.442 "is_configured": true, 00:19:55.442 "data_offset": 0, 00:19:55.442 "data_size": 65536 00:19:55.442 } 00:19:55.442 ] 00:19:55.442 }' 00:19:55.442 07:20:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:55.442 07:20:29 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:55.442 07:20:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:55.700 07:20:29 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:55.700 07:20:29 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:56.670 07:20:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:56.670 07:20:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:56.670 07:20:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:56.670 07:20:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:56.670 07:20:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:56.670 07:20:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:56.670 07:20:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:56.670 07:20:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:56.929 07:20:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:56.929 "name": "raid_bdev1", 00:19:56.929 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:56.929 "strip_size_kb": 0, 00:19:56.929 "state": "online", 00:19:56.929 "raid_level": "raid1", 00:19:56.929 "superblock": false, 00:19:56.929 "num_base_bdevs": 2, 00:19:56.929 "num_base_bdevs_discovered": 2, 00:19:56.929 "num_base_bdevs_operational": 2, 00:19:56.929 "process": { 
00:19:56.929 "type": "rebuild", 00:19:56.929 "target": "spare", 00:19:56.929 "progress": { 00:19:56.929 "blocks": 59392, 00:19:56.929 "percent": 90 00:19:56.929 } 00:19:56.929 }, 00:19:56.929 "base_bdevs_list": [ 00:19:56.929 { 00:19:56.929 "name": "spare", 00:19:56.929 "uuid": "d9546404-0d60-5e27-b7d8-1f33010b69ff", 00:19:56.929 "is_configured": true, 00:19:56.929 "data_offset": 0, 00:19:56.929 "data_size": 65536 00:19:56.929 }, 00:19:56.929 { 00:19:56.929 "name": "BaseBdev2", 00:19:56.929 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:56.929 "is_configured": true, 00:19:56.929 "data_offset": 0, 00:19:56.929 "data_size": 65536 00:19:56.929 } 00:19:56.929 ] 00:19:56.929 }' 00:19:56.929 07:20:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:56.929 07:20:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:56.929 07:20:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:56.929 07:20:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:19:56.929 07:20:30 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:19:57.187 [2024-02-13 07:20:30.637376] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:57.187 [2024-02-13 07:20:30.637447] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:57.187 [2024-02-13 07:20:30.637524] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:58.123 "name": "raid_bdev1", 00:19:58.123 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:58.123 "strip_size_kb": 0, 00:19:58.123 "state": "online", 00:19:58.123 "raid_level": "raid1", 00:19:58.123 "superblock": false, 00:19:58.123 "num_base_bdevs": 2, 00:19:58.123 "num_base_bdevs_discovered": 2, 00:19:58.123 "num_base_bdevs_operational": 2, 00:19:58.123 "base_bdevs_list": [ 00:19:58.123 { 00:19:58.123 "name": "spare", 00:19:58.123 "uuid": "d9546404-0d60-5e27-b7d8-1f33010b69ff", 00:19:58.123 "is_configured": true, 00:19:58.123 "data_offset": 0, 00:19:58.123 "data_size": 65536 00:19:58.123 }, 00:19:58.123 { 00:19:58.123 "name": "BaseBdev2", 00:19:58.123 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:58.123 "is_configured": true, 00:19:58.123 "data_offset": 0, 00:19:58.123 "data_size": 65536 00:19:58.123 } 00:19:58.123 ] 00:19:58.123 }' 00:19:58.123 07:20:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@660 -- # break 00:19:58.382 07:20:31 -- 
bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.382 07:20:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:19:58.641 "name": "raid_bdev1", 00:19:58.641 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:58.641 "strip_size_kb": 0, 00:19:58.641 "state": "online", 00:19:58.641 "raid_level": "raid1", 00:19:58.641 "superblock": false, 00:19:58.641 "num_base_bdevs": 2, 00:19:58.641 "num_base_bdevs_discovered": 2, 00:19:58.641 "num_base_bdevs_operational": 2, 00:19:58.641 "base_bdevs_list": [ 00:19:58.641 { 00:19:58.641 "name": "spare", 00:19:58.641 "uuid": "d9546404-0d60-5e27-b7d8-1f33010b69ff", 00:19:58.641 "is_configured": true, 00:19:58.641 "data_offset": 0, 00:19:58.641 "data_size": 65536 00:19:58.641 }, 00:19:58.641 { 00:19:58.641 "name": "BaseBdev2", 00:19:58.641 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:58.641 "is_configured": true, 00:19:58.641 "data_offset": 0, 00:19:58.641 "data_size": 65536 00:19:58.641 } 00:19:58.641 ] 00:19:58.641 }' 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:58.641 07:20:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.900 07:20:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:19:58.900 "name": "raid_bdev1", 00:19:58.900 "uuid": "2d04293e-2fc8-462f-bb7e-f922406ec4c2", 00:19:58.900 "strip_size_kb": 0, 00:19:58.900 "state": "online", 00:19:58.900 "raid_level": "raid1", 00:19:58.900 "superblock": false, 00:19:58.900 "num_base_bdevs": 2, 00:19:58.900 "num_base_bdevs_discovered": 2, 00:19:58.900 "num_base_bdevs_operational": 2, 00:19:58.900 "base_bdevs_list": [ 00:19:58.900 { 00:19:58.900 "name": "spare", 00:19:58.900 "uuid": "d9546404-0d60-5e27-b7d8-1f33010b69ff", 00:19:58.900 "is_configured": true, 00:19:58.900 "data_offset": 0, 
00:19:58.900 "data_size": 65536 00:19:58.900 }, 00:19:58.900 { 00:19:58.900 "name": "BaseBdev2", 00:19:58.900 "uuid": "ede43ef7-2879-4907-9a08-dce92083ed81", 00:19:58.900 "is_configured": true, 00:19:58.900 "data_offset": 0, 00:19:58.900 "data_size": 65536 00:19:58.900 } 00:19:58.900 ] 00:19:58.900 }' 00:19:58.900 07:20:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:19:58.900 07:20:32 -- common/autotest_common.sh@10 -- # set +x 00:19:59.467 07:20:33 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:19:59.726 [2024-02-13 07:20:33.281966] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:59.726 [2024-02-13 07:20:33.282013] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:59.726 [2024-02-13 07:20:33.282106] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:59.726 [2024-02-13 07:20:33.282183] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:59.726 [2024-02-13 07:20:33.282196] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:19:59.726 07:20:33 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:19:59.726 07:20:33 -- bdev/bdev_raid.sh@671 -- # jq length 00:19:59.985 07:20:33 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:19:59.985 07:20:33 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:19:59.985 07:20:33 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:59.985 07:20:33 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:19:59.985 07:20:33 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:19:59.985 07:20:33 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:59.985 07:20:33 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:19:59.985 07:20:33 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:59.985 07:20:33 -- bdev/nbd_common.sh@12 -- # local i 00:19:59.985 07:20:33 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:59.985 07:20:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:59.985 07:20:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:00.243 /dev/nbd0 00:20:00.243 07:20:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:00.243 07:20:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:00.243 07:20:33 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:20:00.243 07:20:33 -- common/autotest_common.sh@855 -- # local i 00:20:00.243 07:20:33 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:00.243 07:20:33 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:00.243 07:20:33 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:20:00.243 07:20:33 -- common/autotest_common.sh@859 -- # break 00:20:00.243 07:20:33 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:00.243 07:20:33 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:00.243 07:20:33 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:00.243 1+0 records in 00:20:00.243 1+0 records out 00:20:00.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022047 s, 18.6 MB/s 00:20:00.243 07:20:33 -- common/autotest_common.sh@872 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:00.243 07:20:33 -- common/autotest_common.sh@872 -- # size=4096 00:20:00.243 07:20:33 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:00.243 07:20:33 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:00.243 07:20:33 -- common/autotest_common.sh@875 -- # return 0 00:20:00.243 07:20:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:00.243 07:20:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:00.243 07:20:33 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:00.502 /dev/nbd1 00:20:00.502 07:20:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:00.502 07:20:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:00.502 07:20:34 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:20:00.502 07:20:34 -- common/autotest_common.sh@855 -- # local i 00:20:00.502 07:20:34 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:00.502 07:20:34 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:00.502 07:20:34 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:20:00.502 07:20:34 -- common/autotest_common.sh@859 -- # break 00:20:00.502 07:20:34 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:00.502 07:20:34 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:00.502 07:20:34 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:00.502 1+0 records in 00:20:00.502 1+0 records out 00:20:00.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353379 s, 11.6 MB/s 00:20:00.502 07:20:34 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:00.502 07:20:34 -- common/autotest_common.sh@872 -- # size=4096 00:20:00.502 07:20:34 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:00.502 07:20:34 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:00.502 07:20:34 -- common/autotest_common.sh@875 -- # return 0 00:20:00.502 07:20:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:00.502 07:20:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:00.502 07:20:34 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:00.760 07:20:34 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:00.760 07:20:34 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:00.760 07:20:34 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:00.760 07:20:34 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:00.760 07:20:34 -- bdev/nbd_common.sh@51 -- # local i 00:20:00.760 07:20:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:00.760 07:20:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:00.760 07:20:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 
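(Annotation: the nbd traffic above, from bdev_raid.sh@687-689, is the test's data-integrity check — it exports the surviving base bdev and the rebuilt spare as block devices and byte-compares them. A minimal standalone sketch of that pattern, assuming an SPDK app is still listening on /var/tmp/spdk-raid.sock and that /dev/nbd0 and /dev/nbd1 are unused:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0   # export bdevs as kernel block devices
$rpc -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
cmp -i 0 /dev/nbd0 /dev/nbd1          # non-zero exit if the rebuilt mirror diverges; -i 0 because data_offset is 0 here
$rpc -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0              # detach before tearing down the raid bdev
$rpc -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
)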
00:20:01.019 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@41 -- # break 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@45 -- # return 0 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:01.019 07:20:34 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@41 -- # break 00:20:01.277 07:20:34 -- bdev/nbd_common.sh@45 -- # return 0 00:20:01.277 07:20:34 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:01.277 07:20:34 -- bdev/bdev_raid.sh@709 -- # killprocess 127264 00:20:01.277 07:20:34 -- common/autotest_common.sh@924 -- # '[' -z 127264 ']' 00:20:01.277 07:20:34 -- common/autotest_common.sh@928 -- # kill -0 127264 00:20:01.277 07:20:34 -- common/autotest_common.sh@929 -- # uname 00:20:01.277 07:20:34 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:01.277 07:20:34 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 127264 00:20:01.277 07:20:34 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:20:01.277 killing process with pid 127264 00:20:01.277 07:20:34 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:20:01.277 07:20:34 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 127264' 00:20:01.277 Received shutdown signal, test time was about 60.000000 seconds 00:20:01.277 00:20:01.277 Latency(us) 00:20:01.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.277 =================================================================================================================== 00:20:01.277 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:01.277 07:20:34 -- common/autotest_common.sh@943 -- # kill 127264 00:20:01.277 07:20:34 -- common/autotest_common.sh@948 -- # wait 127264 00:20:01.277 [2024-02-13 07:20:34.964137] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:01.536 [2024-02-13 07:20:35.166277] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:02.471 ************************************ 00:20:02.471 END TEST raid_rebuild_test 00:20:02.471 ************************************ 00:20:02.471 07:20:36 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:02.471 00:20:02.471 real 0m22.317s 00:20:02.471 user 0m30.967s 00:20:02.471 sys 0m3.881s 00:20:02.471 07:20:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:02.471 07:20:36 -- common/autotest_common.sh@10 -- # set +x 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:20:02.730 07:20:36 -- common/autotest_common.sh@1075 -- # '[' 6 -le 
1 ']' 00:20:02.730 07:20:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:02.730 07:20:36 -- common/autotest_common.sh@10 -- # set +x 00:20:02.730 ************************************ 00:20:02.730 START TEST raid_rebuild_test_sb 00:20:02.730 ************************************ 00:20:02.730 07:20:36 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid1 2 true false 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@544 -- # raid_pid=127873 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@545 -- # waitforlisten 127873 /var/tmp/spdk-raid.sock 00:20:02.730 07:20:36 -- common/autotest_common.sh@817 -- # '[' -z 127873 ']' 00:20:02.730 07:20:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:02.730 07:20:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:02.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:02.730 07:20:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:02.730 07:20:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:02.730 07:20:36 -- common/autotest_common.sh@10 -- # set +x 00:20:02.730 07:20:36 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:02.730 [2024-02-13 07:20:36.281685] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:20:02.730 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:02.730 Zero copy mechanism will not be used. 
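(Annotation: the superblock variant starting here builds each base bdev as malloc + passthru and creates the raid1 with on-disk metadata via -s. A condensed sketch of the setup that the bdev_raid.sh@550-563 trace below performs, assuming the same bdevperf RPC socket:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc                        # 32 MiB backing store, 512-byte blocks
$rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
$rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
$rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
$rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1  # -s writes the superblock; data_offset becomes 2048
)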
00:20:02.730 [2024-02-13 07:20:36.281915] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid127873 ] 00:20:02.989 [2024-02-13 07:20:36.447121] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.989 [2024-02-13 07:20:36.634577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.247 [2024-02-13 07:20:36.810225] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:03.815 07:20:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:03.815 07:20:37 -- common/autotest_common.sh@850 -- # return 0 00:20:03.815 07:20:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:03.815 07:20:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:03.815 07:20:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:04.074 BaseBdev1_malloc 00:20:04.074 07:20:37 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:04.074 [2024-02-13 07:20:37.734916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:04.074 [2024-02-13 07:20:37.735042] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.074 [2024-02-13 07:20:37.735081] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:04.074 [2024-02-13 07:20:37.735129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.074 [2024-02-13 07:20:37.737359] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.074 [2024-02-13 07:20:37.737407] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:04.074 BaseBdev1 00:20:04.074 07:20:37 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:04.074 07:20:37 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:04.074 07:20:37 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:04.333 BaseBdev2_malloc 00:20:04.616 07:20:38 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:04.616 [2024-02-13 07:20:38.220323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:04.616 [2024-02-13 07:20:38.220462] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:04.616 [2024-02-13 07:20:38.220507] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:04.616 [2024-02-13 07:20:38.220560] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:04.616 [2024-02-13 07:20:38.222770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:04.616 [2024-02-13 07:20:38.222852] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:04.616 BaseBdev2 00:20:04.616 07:20:38 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:04.876 spare_malloc 00:20:04.876 07:20:38 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:05.134 spare_delay 00:20:05.134 07:20:38 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:05.393 [2024-02-13 07:20:38.966323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:05.393 [2024-02-13 07:20:38.966436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:05.393 [2024-02-13 07:20:38.966475] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:05.394 [2024-02-13 07:20:38.966515] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:05.394 [2024-02-13 07:20:38.968566] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:05.394 [2024-02-13 07:20:38.968639] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:05.394 spare 00:20:05.394 07:20:38 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:05.653 [2024-02-13 07:20:39.162416] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:05.653 [2024-02-13 07:20:39.164097] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:05.653 [2024-02-13 07:20:39.164376] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:05.653 [2024-02-13 07:20:39.164411] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:05.653 [2024-02-13 07:20:39.164549] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:05.653 [2024-02-13 07:20:39.164896] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:05.653 [2024-02-13 07:20:39.164927] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:05.653 [2024-02-13 07:20:39.165126] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:05.653 07:20:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:05.911 07:20:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:05.911 "name": "raid_bdev1", 00:20:05.911 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:05.911 
"strip_size_kb": 0, 00:20:05.911 "state": "online", 00:20:05.911 "raid_level": "raid1", 00:20:05.911 "superblock": true, 00:20:05.911 "num_base_bdevs": 2, 00:20:05.911 "num_base_bdevs_discovered": 2, 00:20:05.912 "num_base_bdevs_operational": 2, 00:20:05.912 "base_bdevs_list": [ 00:20:05.912 { 00:20:05.912 "name": "BaseBdev1", 00:20:05.912 "uuid": "394ee5ad-8acc-517a-83bb-9a9cbd24515e", 00:20:05.912 "is_configured": true, 00:20:05.912 "data_offset": 2048, 00:20:05.912 "data_size": 63488 00:20:05.912 }, 00:20:05.912 { 00:20:05.912 "name": "BaseBdev2", 00:20:05.912 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:05.912 "is_configured": true, 00:20:05.912 "data_offset": 2048, 00:20:05.912 "data_size": 63488 00:20:05.912 } 00:20:05.912 ] 00:20:05.912 }' 00:20:05.912 07:20:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:05.912 07:20:39 -- common/autotest_common.sh@10 -- # set +x 00:20:06.479 07:20:40 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:06.479 07:20:40 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:06.738 [2024-02-13 07:20:40.250899] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:06.738 07:20:40 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:06.738 07:20:40 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:06.738 07:20:40 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:06.996 07:20:40 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:06.997 07:20:40 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:20:06.997 07:20:40 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:20:06.997 07:20:40 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@12 -- # local i 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:06.997 [2024-02-13 07:20:40.642774] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:06.997 /dev/nbd0 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:06.997 07:20:40 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:06.997 07:20:40 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:20:06.997 07:20:40 -- common/autotest_common.sh@855 -- # local i 00:20:06.997 07:20:40 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:06.997 07:20:40 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:06.997 07:20:40 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:20:06.997 07:20:40 -- common/autotest_common.sh@859 -- # break 00:20:06.997 07:20:40 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:06.997 07:20:40 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:06.997 07:20:40 -- common/autotest_common.sh@871 -- # dd 
if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:07.255 1+0 records in 00:20:07.255 1+0 records out 00:20:07.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502666 s, 8.1 MB/s 00:20:07.255 07:20:40 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:07.255 07:20:40 -- common/autotest_common.sh@872 -- # size=4096 00:20:07.255 07:20:40 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:07.255 07:20:40 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:07.255 07:20:40 -- common/autotest_common.sh@875 -- # return 0 00:20:07.255 07:20:40 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:07.255 07:20:40 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:07.255 07:20:40 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:20:07.255 07:20:40 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:20:07.255 07:20:40 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:20:12.522 63488+0 records in 00:20:12.522 63488+0 records out 00:20:12.522 32505856 bytes (33 MB, 31 MiB) copied, 4.88482 s, 6.7 MB/s 00:20:12.522 07:20:45 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:12.522 07:20:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:12.522 07:20:45 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:12.522 07:20:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:12.522 07:20:45 -- bdev/nbd_common.sh@51 -- # local i 00:20:12.522 07:20:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:12.522 07:20:45 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:12.522 07:20:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:12.523 [2024-02-13 07:20:45.782933] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@41 -- # break 00:20:12.523 07:20:45 -- bdev/nbd_common.sh@45 -- # return 0 00:20:12.523 07:20:45 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:12.523 [2024-02-13 07:20:46.130440] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:12.523 07:20:46 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:12.523 07:20:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:12.781 07:20:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:12.781 "name": "raid_bdev1", 00:20:12.781 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:12.781 "strip_size_kb": 0, 00:20:12.781 "state": "online", 00:20:12.781 "raid_level": "raid1", 00:20:12.781 "superblock": true, 00:20:12.781 "num_base_bdevs": 2, 00:20:12.781 "num_base_bdevs_discovered": 1, 00:20:12.781 "num_base_bdevs_operational": 1, 00:20:12.781 "base_bdevs_list": [ 00:20:12.781 { 00:20:12.781 "name": null, 00:20:12.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:12.781 "is_configured": false, 00:20:12.781 "data_offset": 2048, 00:20:12.781 "data_size": 63488 00:20:12.781 }, 00:20:12.781 { 00:20:12.781 "name": "BaseBdev2", 00:20:12.781 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:12.781 "is_configured": true, 00:20:12.781 "data_offset": 2048, 00:20:12.781 "data_size": 63488 00:20:12.781 } 00:20:12.781 ] 00:20:12.781 }' 00:20:12.781 07:20:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:12.781 07:20:46 -- common/autotest_common.sh@10 -- # set +x 00:20:13.348 07:20:46 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:13.607 [2024-02-13 07:20:47.114709] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:13.607 [2024-02-13 07:20:47.114753] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:13.607 [2024-02-13 07:20:47.126365] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4e30 00:20:13.607 [2024-02-13 07:20:47.128061] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:13.607 07:20:47 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:14.546 07:20:48 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:14.546 07:20:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:14.546 07:20:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:14.546 07:20:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:14.546 07:20:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:14.546 07:20:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:14.546 07:20:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.804 07:20:48 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:14.805 "name": "raid_bdev1", 00:20:14.805 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:14.805 "strip_size_kb": 0, 00:20:14.805 "state": "online", 00:20:14.805 "raid_level": "raid1", 00:20:14.805 "superblock": true, 00:20:14.805 "num_base_bdevs": 2, 00:20:14.805 "num_base_bdevs_discovered": 2, 00:20:14.805 "num_base_bdevs_operational": 2, 00:20:14.805 "process": { 00:20:14.805 "type": "rebuild", 00:20:14.805 "target": "spare", 00:20:14.805 "progress": { 00:20:14.805 "blocks": 22528, 00:20:14.805 
"percent": 35 00:20:14.805 } 00:20:14.805 }, 00:20:14.805 "base_bdevs_list": [ 00:20:14.805 { 00:20:14.805 "name": "spare", 00:20:14.805 "uuid": "d9102b2e-4da3-5d38-ac2c-32ce811bbde1", 00:20:14.805 "is_configured": true, 00:20:14.805 "data_offset": 2048, 00:20:14.805 "data_size": 63488 00:20:14.805 }, 00:20:14.805 { 00:20:14.805 "name": "BaseBdev2", 00:20:14.805 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:14.805 "is_configured": true, 00:20:14.805 "data_offset": 2048, 00:20:14.805 "data_size": 63488 00:20:14.805 } 00:20:14.805 ] 00:20:14.805 }' 00:20:14.805 07:20:48 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:14.805 07:20:48 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:14.805 07:20:48 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:14.805 07:20:48 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:14.805 07:20:48 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:15.063 [2024-02-13 07:20:48.666498] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:15.063 [2024-02-13 07:20:48.737679] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:15.063 [2024-02-13 07:20:48.737773] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:15.322 07:20:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.322 07:20:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:15.322 "name": "raid_bdev1", 00:20:15.322 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:15.322 "strip_size_kb": 0, 00:20:15.322 "state": "online", 00:20:15.322 "raid_level": "raid1", 00:20:15.322 "superblock": true, 00:20:15.322 "num_base_bdevs": 2, 00:20:15.322 "num_base_bdevs_discovered": 1, 00:20:15.322 "num_base_bdevs_operational": 1, 00:20:15.322 "base_bdevs_list": [ 00:20:15.322 { 00:20:15.322 "name": null, 00:20:15.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:15.322 "is_configured": false, 00:20:15.322 "data_offset": 2048, 00:20:15.322 "data_size": 63488 00:20:15.322 }, 00:20:15.322 { 00:20:15.322 "name": "BaseBdev2", 00:20:15.322 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:15.322 "is_configured": true, 00:20:15.322 "data_offset": 2048, 00:20:15.322 "data_size": 63488 00:20:15.322 } 00:20:15.322 ] 00:20:15.322 }' 00:20:15.581 07:20:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:15.581 07:20:49 -- common/autotest_common.sh@10 -- # set +x 00:20:16.148 07:20:49 -- 
bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:16.148 07:20:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:16.148 07:20:49 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:16.148 07:20:49 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:16.148 07:20:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:16.148 07:20:49 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:16.148 07:20:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.406 07:20:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:16.406 "name": "raid_bdev1", 00:20:16.406 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:16.406 "strip_size_kb": 0, 00:20:16.406 "state": "online", 00:20:16.406 "raid_level": "raid1", 00:20:16.406 "superblock": true, 00:20:16.406 "num_base_bdevs": 2, 00:20:16.406 "num_base_bdevs_discovered": 1, 00:20:16.406 "num_base_bdevs_operational": 1, 00:20:16.406 "base_bdevs_list": [ 00:20:16.406 { 00:20:16.406 "name": null, 00:20:16.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:16.406 "is_configured": false, 00:20:16.406 "data_offset": 2048, 00:20:16.406 "data_size": 63488 00:20:16.406 }, 00:20:16.406 { 00:20:16.406 "name": "BaseBdev2", 00:20:16.406 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:16.406 "is_configured": true, 00:20:16.406 "data_offset": 2048, 00:20:16.406 "data_size": 63488 00:20:16.406 } 00:20:16.406 ] 00:20:16.406 }' 00:20:16.406 07:20:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:16.406 07:20:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:16.406 07:20:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:16.406 07:20:50 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:16.406 07:20:50 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:16.664 [2024-02-13 07:20:50.189593] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:16.664 [2024-02-13 07:20:50.189661] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:16.664 [2024-02-13 07:20:50.201543] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca4fd0 00:20:16.664 [2024-02-13 07:20:50.203335] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:16.664 07:20:50 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:17.598 07:20:51 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:17.598 07:20:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:17.598 07:20:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:17.598 07:20:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:17.598 07:20:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:17.598 07:20:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:17.598 07:20:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.856 07:20:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:17.856 "name": "raid_bdev1", 00:20:17.856 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:17.856 "strip_size_kb": 0, 00:20:17.856 "state": "online", 00:20:17.856 "raid_level": "raid1", 00:20:17.856 
"superblock": true, 00:20:17.856 "num_base_bdevs": 2, 00:20:17.856 "num_base_bdevs_discovered": 2, 00:20:17.856 "num_base_bdevs_operational": 2, 00:20:17.856 "process": { 00:20:17.856 "type": "rebuild", 00:20:17.856 "target": "spare", 00:20:17.856 "progress": { 00:20:17.856 "blocks": 24576, 00:20:17.856 "percent": 38 00:20:17.856 } 00:20:17.856 }, 00:20:17.856 "base_bdevs_list": [ 00:20:17.856 { 00:20:17.856 "name": "spare", 00:20:17.856 "uuid": "d9102b2e-4da3-5d38-ac2c-32ce811bbde1", 00:20:17.856 "is_configured": true, 00:20:17.856 "data_offset": 2048, 00:20:17.856 "data_size": 63488 00:20:17.856 }, 00:20:17.856 { 00:20:17.856 "name": "BaseBdev2", 00:20:17.856 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:17.856 "is_configured": true, 00:20:17.856 "data_offset": 2048, 00:20:17.856 "data_size": 63488 00:20:17.856 } 00:20:17.856 ] 00:20:17.856 }' 00:20:17.856 07:20:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:17.856 07:20:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:17.856 07:20:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:18.115 07:20:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.115 07:20:51 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:18.115 07:20:51 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:18.115 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@657 -- # local timeout=432 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:18.116 07:20:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.374 07:20:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:18.374 "name": "raid_bdev1", 00:20:18.374 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:18.374 "strip_size_kb": 0, 00:20:18.374 "state": "online", 00:20:18.374 "raid_level": "raid1", 00:20:18.374 "superblock": true, 00:20:18.374 "num_base_bdevs": 2, 00:20:18.374 "num_base_bdevs_discovered": 2, 00:20:18.374 "num_base_bdevs_operational": 2, 00:20:18.374 "process": { 00:20:18.374 "type": "rebuild", 00:20:18.374 "target": "spare", 00:20:18.374 "progress": { 00:20:18.374 "blocks": 32768, 00:20:18.374 "percent": 51 00:20:18.374 } 00:20:18.374 }, 00:20:18.374 "base_bdevs_list": [ 00:20:18.374 { 00:20:18.374 "name": "spare", 00:20:18.374 "uuid": "d9102b2e-4da3-5d38-ac2c-32ce811bbde1", 00:20:18.374 "is_configured": true, 00:20:18.374 "data_offset": 2048, 00:20:18.374 "data_size": 63488 00:20:18.374 }, 00:20:18.374 { 00:20:18.374 "name": "BaseBdev2", 00:20:18.374 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:18.374 "is_configured": true, 00:20:18.374 "data_offset": 2048, 00:20:18.374 
"data_size": 63488 00:20:18.374 } 00:20:18.374 ] 00:20:18.374 }' 00:20:18.375 07:20:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:18.375 07:20:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:18.375 07:20:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:18.375 07:20:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:18.375 07:20:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:19.316 07:20:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:19.316 07:20:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:19.316 07:20:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:19.316 07:20:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:19.316 07:20:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:19.316 07:20:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:19.316 07:20:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:19.316 07:20:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:19.575 07:20:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:19.575 "name": "raid_bdev1", 00:20:19.575 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:19.575 "strip_size_kb": 0, 00:20:19.575 "state": "online", 00:20:19.575 "raid_level": "raid1", 00:20:19.575 "superblock": true, 00:20:19.575 "num_base_bdevs": 2, 00:20:19.575 "num_base_bdevs_discovered": 2, 00:20:19.575 "num_base_bdevs_operational": 2, 00:20:19.575 "process": { 00:20:19.575 "type": "rebuild", 00:20:19.575 "target": "spare", 00:20:19.575 "progress": { 00:20:19.575 "blocks": 59392, 00:20:19.575 "percent": 93 00:20:19.575 } 00:20:19.575 }, 00:20:19.575 "base_bdevs_list": [ 00:20:19.575 { 00:20:19.575 "name": "spare", 00:20:19.575 "uuid": "d9102b2e-4da3-5d38-ac2c-32ce811bbde1", 00:20:19.575 "is_configured": true, 00:20:19.575 "data_offset": 2048, 00:20:19.575 "data_size": 63488 00:20:19.575 }, 00:20:19.575 { 00:20:19.575 "name": "BaseBdev2", 00:20:19.575 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:19.575 "is_configured": true, 00:20:19.575 "data_offset": 2048, 00:20:19.575 "data_size": 63488 00:20:19.575 } 00:20:19.575 ] 00:20:19.575 }' 00:20:19.575 07:20:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:19.575 07:20:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:19.575 07:20:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:19.834 07:20:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:19.834 07:20:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:19.834 [2024-02-13 07:20:53.320540] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:19.834 [2024-02-13 07:20:53.320610] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:19.834 [2024-02-13 07:20:53.320781] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:20.771 07:20:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:20.771 07:20:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:20.771 07:20:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:20.771 07:20:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:20.771 07:20:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:20.771 07:20:54 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:20.771 07:20:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:20.771 07:20:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.029 07:20:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:21.029 "name": "raid_bdev1", 00:20:21.029 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:21.029 "strip_size_kb": 0, 00:20:21.029 "state": "online", 00:20:21.029 "raid_level": "raid1", 00:20:21.029 "superblock": true, 00:20:21.029 "num_base_bdevs": 2, 00:20:21.029 "num_base_bdevs_discovered": 2, 00:20:21.029 "num_base_bdevs_operational": 2, 00:20:21.029 "base_bdevs_list": [ 00:20:21.029 { 00:20:21.029 "name": "spare", 00:20:21.029 "uuid": "d9102b2e-4da3-5d38-ac2c-32ce811bbde1", 00:20:21.029 "is_configured": true, 00:20:21.029 "data_offset": 2048, 00:20:21.029 "data_size": 63488 00:20:21.029 }, 00:20:21.029 { 00:20:21.029 "name": "BaseBdev2", 00:20:21.029 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:21.029 "is_configured": true, 00:20:21.029 "data_offset": 2048, 00:20:21.029 "data_size": 63488 00:20:21.029 } 00:20:21.029 ] 00:20:21.029 }' 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@660 -- # break 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.030 07:20:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.288 07:20:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:21.288 "name": "raid_bdev1", 00:20:21.288 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:21.288 "strip_size_kb": 0, 00:20:21.288 "state": "online", 00:20:21.288 "raid_level": "raid1", 00:20:21.288 "superblock": true, 00:20:21.288 "num_base_bdevs": 2, 00:20:21.288 "num_base_bdevs_discovered": 2, 00:20:21.288 "num_base_bdevs_operational": 2, 00:20:21.288 "base_bdevs_list": [ 00:20:21.288 { 00:20:21.288 "name": "spare", 00:20:21.288 "uuid": "d9102b2e-4da3-5d38-ac2c-32ce811bbde1", 00:20:21.288 "is_configured": true, 00:20:21.288 "data_offset": 2048, 00:20:21.288 "data_size": 63488 00:20:21.288 }, 00:20:21.288 { 00:20:21.288 "name": "BaseBdev2", 00:20:21.288 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:21.288 "is_configured": true, 00:20:21.288 "data_offset": 2048, 00:20:21.288 "data_size": 63488 00:20:21.288 } 00:20:21.288 ] 00:20:21.288 }' 00:20:21.288 07:20:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:21.288 07:20:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:21.288 07:20:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:21.547 07:20:54 -- 
bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:21.547 07:20:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.547 07:20:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:21.547 "name": "raid_bdev1", 00:20:21.547 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:21.547 "strip_size_kb": 0, 00:20:21.547 "state": "online", 00:20:21.547 "raid_level": "raid1", 00:20:21.547 "superblock": true, 00:20:21.547 "num_base_bdevs": 2, 00:20:21.547 "num_base_bdevs_discovered": 2, 00:20:21.547 "num_base_bdevs_operational": 2, 00:20:21.547 "base_bdevs_list": [ 00:20:21.547 { 00:20:21.547 "name": "spare", 00:20:21.547 "uuid": "d9102b2e-4da3-5d38-ac2c-32ce811bbde1", 00:20:21.547 "is_configured": true, 00:20:21.547 "data_offset": 2048, 00:20:21.547 "data_size": 63488 00:20:21.547 }, 00:20:21.547 { 00:20:21.547 "name": "BaseBdev2", 00:20:21.547 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:21.547 "is_configured": true, 00:20:21.547 "data_offset": 2048, 00:20:21.547 "data_size": 63488 00:20:21.547 } 00:20:21.547 ] 00:20:21.547 }' 00:20:21.547 07:20:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:21.547 07:20:55 -- common/autotest_common.sh@10 -- # set +x 00:20:22.484 07:20:55 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:22.484 [2024-02-13 07:20:55.992010] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:22.484 [2024-02-13 07:20:55.992047] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:22.484 [2024-02-13 07:20:55.992147] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:22.484 [2024-02-13 07:20:55.992229] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:22.484 [2024-02-13 07:20:55.992240] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:20:22.484 07:20:56 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:22.484 07:20:56 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:22.743 07:20:56 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:22.743 07:20:56 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:20:22.743 07:20:56 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:22.743 07:20:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:22.743 07:20:56 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 
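The `[: =: unary operator expected` failure recorded earlier (bdev_raid.sh line 617, trace `'[' = false ']'`) is the classic single-bracket pitfall: the tested variable expanded to nothing, so the left operand of `=` disappeared and `[` saw the operator where an operand should be. The variable's name is not visible in the trace; the sketch below reproduces the failure mode with a generic name and shows the two usual fixes, quoting the expansion or switching to `[[ ]]`, which does not word-split:

    #!/usr/bin/env bash
    var=""                            # expands empty, as at line 617 above
    # [ $var = false ] && echo no    # fails: "[: =: unary operator expected"
    [ "$var" = false ] && echo no    # quoted: test is simply false, no error
    [[ $var = false ]] && echo no    # [[ ]] does not word-split, also safe
    echo done

The test continues past the error because the failed `[` simply returns nonzero, but the `'[' true = true ']'` branch on line 617 only works by accident when the variable happens to be non-empty.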
00:20:22.743 07:20:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:22.743 07:20:56 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:22.743 07:20:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:22.743 07:20:56 -- bdev/nbd_common.sh@12 -- # local i 00:20:22.743 07:20:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:22.743 07:20:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:22.743 07:20:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:23.002 /dev/nbd0 00:20:23.002 07:20:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:23.002 07:20:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:23.002 07:20:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:20:23.002 07:20:56 -- common/autotest_common.sh@855 -- # local i 00:20:23.002 07:20:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:23.003 07:20:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:23.003 07:20:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:20:23.003 07:20:56 -- common/autotest_common.sh@859 -- # break 00:20:23.003 07:20:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:23.003 07:20:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:23.003 07:20:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.003 1+0 records in 00:20:23.003 1+0 records out 00:20:23.003 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000947816 s, 4.3 MB/s 00:20:23.003 07:20:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.003 07:20:56 -- common/autotest_common.sh@872 -- # size=4096 00:20:23.003 07:20:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.003 07:20:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:23.003 07:20:56 -- common/autotest_common.sh@875 -- # return 0 00:20:23.003 07:20:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.003 07:20:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.003 07:20:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:20:23.262 /dev/nbd1 00:20:23.262 07:20:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:23.262 07:20:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:23.262 07:20:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:20:23.262 07:20:56 -- common/autotest_common.sh@855 -- # local i 00:20:23.262 07:20:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:23.262 07:20:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:23.262 07:20:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:20:23.262 07:20:56 -- common/autotest_common.sh@859 -- # break 00:20:23.262 07:20:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:23.262 07:20:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:23.262 07:20:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:23.262 1+0 records in 00:20:23.262 1+0 records out 00:20:23.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463669 s, 8.8 MB/s 00:20:23.262 07:20:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.262 07:20:56 -- common/autotest_common.sh@872 -- # 
size=4096 00:20:23.262 07:20:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:23.262 07:20:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:23.262 07:20:56 -- common/autotest_common.sh@875 -- # return 0 00:20:23.262 07:20:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:23.262 07:20:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:23.262 07:20:56 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:23.521 07:20:56 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:20:23.521 07:20:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:23.521 07:20:56 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:23.521 07:20:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:23.521 07:20:56 -- bdev/nbd_common.sh@51 -- # local i 00:20:23.521 07:20:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.521 07:20:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:23.521 07:20:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:23.521 07:20:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:23.521 07:20:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:23.521 07:20:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.521 07:20:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.521 07:20:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:23.521 07:20:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:23.780 07:20:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:23.780 07:20:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.780 07:20:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:23.780 07:20:57 -- bdev/nbd_common.sh@41 -- # break 00:20:23.780 07:20:57 -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.780 07:20:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.780 07:20:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@41 -- # break 00:20:24.039 07:20:57 -- bdev/nbd_common.sh@45 -- # return 0 00:20:24.039 07:20:57 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:20:24.039 07:20:57 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:24.039 07:20:57 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:20:24.039 07:20:57 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:20:24.319 07:20:57 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
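The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above compares the exported base bdev against the rebuilt spare while skipping the first 1 MiB of both devices: this superblock variant of the test reports `data_offset: 2048` blocks in the RPC output, and 2048 blocks x 512 B blocklen = 1,048,576 B, so only the data region past the superblock is required to match. A sketch of the same check with the skip derived instead of hard-coded (device paths as in the log; assumes both disks are still exported via NBD):

    #!/usr/bin/env bash
    data_offset_blocks=2048   # from base_bdevs_list[].data_offset in the RPC output
    blocklen=512              # from "blockcnt 63488, blocklen 512" in the log
    skip=$(( data_offset_blocks * blocklen ))   # 1048576
    # cmp -i SKIP ignores the first SKIP bytes of both inputs; exit 0 means identical
    cmp -i "$skip" /dev/nbd0 /dev/nbd1 && echo "data regions match"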
00:20:24.603 [2024-02-13 07:20:58.088887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:24.603 [2024-02-13 07:20:58.088975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.603 [2024-02-13 07:20:58.089010] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:20:24.603 [2024-02-13 07:20:58.089036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.603 [2024-02-13 07:20:58.091098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.603 [2024-02-13 07:20:58.091162] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:24.603 [2024-02-13 07:20:58.091259] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:24.603 [2024-02-13 07:20:58.091318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:24.603 BaseBdev1 00:20:24.603 07:20:58 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:20:24.603 07:20:58 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:20:24.603 07:20:58 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:20:24.603 07:20:58 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:24.861 [2024-02-13 07:20:58.432979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:24.861 [2024-02-13 07:20:58.433107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:24.861 [2024-02-13 07:20:58.433139] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:20:24.861 [2024-02-13 07:20:58.433166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:24.861 [2024-02-13 07:20:58.433617] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:24.861 [2024-02-13 07:20:58.433677] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:24.861 [2024-02-13 07:20:58.433799] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:20:24.861 [2024-02-13 07:20:58.433814] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:20:24.861 [2024-02-13 07:20:58.433821] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:24.862 [2024-02-13 07:20:58.433848] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a580 name raid_bdev1, state configuring 00:20:24.862 [2024-02-13 07:20:58.433917] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:24.862 BaseBdev2 00:20:24.862 07:20:58 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:20:25.120 07:20:58 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:25.379 [2024-02-13 07:20:58.873120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:25.379 [2024-02-13 07:20:58.873191] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
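The `raid_bdev_examine_sb` message above spells out the arbitration rule for stale RAID metadata: when a re-examined base bdev carries a superblock with a higher sequence number (3 on BaseBdev2) than the currently assembled raid bdev (1 on raid_bdev1), the existing instance is deleted and reassembled around the newer metadata. A shell mirror of that rule, for illustration only — the real decision lives in bdev_raid.c, not in the test script:

    #!/usr/bin/env bash
    # Illustrative only: mirrors the seq_number comparison reported in the log.
    examine_sb() {
      local found_seq=$1 existing_seq=$2
      if (( found_seq > existing_seq )); then
        echo "superblock newer ($found_seq > $existing_seq): drop stale raid bdev, reassemble"
      else
        echo "superblock not newer ($found_seq <= $existing_seq): keep existing raid bdev"
      fi
    }
    examine_sb 3 1   # the values seen for BaseBdev2 vs raid_bdev1 above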
00:20:25.379 [2024-02-13 07:20:58.873227] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:20:25.379 [2024-02-13 07:20:58.873247] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:25.379 [2024-02-13 07:20:58.873820] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:25.379 [2024-02-13 07:20:58.873900] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:25.379 [2024-02-13 07:20:58.874023] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:20:25.379 [2024-02-13 07:20:58.874094] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:25.379 spare 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:25.379 07:20:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.379 [2024-02-13 07:20:58.974230] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ab80 00:20:25.379 [2024-02-13 07:20:58.974249] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:25.379 [2024-02-13 07:20:58.974358] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5b10 00:20:25.379 [2024-02-13 07:20:58.974756] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ab80 00:20:25.379 [2024-02-13 07:20:58.974776] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ab80 00:20:25.379 [2024-02-13 07:20:58.974905] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.638 07:20:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:25.638 "name": "raid_bdev1", 00:20:25.638 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:25.638 "strip_size_kb": 0, 00:20:25.638 "state": "online", 00:20:25.638 "raid_level": "raid1", 00:20:25.638 "superblock": true, 00:20:25.638 "num_base_bdevs": 2, 00:20:25.638 "num_base_bdevs_discovered": 2, 00:20:25.638 "num_base_bdevs_operational": 2, 00:20:25.638 "base_bdevs_list": [ 00:20:25.638 { 00:20:25.638 "name": "spare", 00:20:25.638 "uuid": "d9102b2e-4da3-5d38-ac2c-32ce811bbde1", 00:20:25.638 "is_configured": true, 00:20:25.638 "data_offset": 2048, 00:20:25.638 "data_size": 63488 00:20:25.638 }, 00:20:25.638 { 00:20:25.638 "name": "BaseBdev2", 00:20:25.638 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:25.638 "is_configured": true, 00:20:25.638 "data_offset": 2048, 00:20:25.638 "data_size": 63488 00:20:25.638 } 00:20:25.638 ] 00:20:25.638 }' 00:20:25.638 07:20:59 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:25.638 07:20:59 -- common/autotest_common.sh@10 -- # set +x 00:20:26.205 07:20:59 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.205 07:20:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:26.205 07:20:59 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:26.205 07:20:59 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:26.205 07:20:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:26.205 07:20:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.205 07:20:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.464 07:21:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:26.464 "name": "raid_bdev1", 00:20:26.464 "uuid": "453ce5c7-4d4d-4a85-b697-2a9bc536bceb", 00:20:26.464 "strip_size_kb": 0, 00:20:26.464 "state": "online", 00:20:26.464 "raid_level": "raid1", 00:20:26.464 "superblock": true, 00:20:26.464 "num_base_bdevs": 2, 00:20:26.464 "num_base_bdevs_discovered": 2, 00:20:26.464 "num_base_bdevs_operational": 2, 00:20:26.464 "base_bdevs_list": [ 00:20:26.464 { 00:20:26.464 "name": "spare", 00:20:26.464 "uuid": "d9102b2e-4da3-5d38-ac2c-32ce811bbde1", 00:20:26.464 "is_configured": true, 00:20:26.464 "data_offset": 2048, 00:20:26.464 "data_size": 63488 00:20:26.464 }, 00:20:26.464 { 00:20:26.464 "name": "BaseBdev2", 00:20:26.464 "uuid": "e2aad471-e719-5009-aa78-f9b277ee712f", 00:20:26.464 "is_configured": true, 00:20:26.464 "data_offset": 2048, 00:20:26.464 "data_size": 63488 00:20:26.464 } 00:20:26.464 ] 00:20:26.464 }' 00:20:26.464 07:21:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:26.464 07:21:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:26.464 07:21:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:26.464 07:21:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:26.464 07:21:00 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:26.464 07:21:00 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:26.723 07:21:00 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:20:26.723 07:21:00 -- bdev/bdev_raid.sh@709 -- # killprocess 127873 00:20:26.723 07:21:00 -- common/autotest_common.sh@924 -- # '[' -z 127873 ']' 00:20:26.723 07:21:00 -- common/autotest_common.sh@928 -- # kill -0 127873 00:20:26.723 07:21:00 -- common/autotest_common.sh@929 -- # uname 00:20:26.723 07:21:00 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:26.723 07:21:00 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 127873 00:20:26.723 killing process with pid 127873 00:20:26.723 Received shutdown signal, test time was about 60.000000 seconds 00:20:26.723 00:20:26.723 Latency(us) 00:20:26.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.723 =================================================================================================================== 00:20:26.723 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:26.723 07:21:00 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:20:26.723 07:21:00 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:20:26.723 07:21:00 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 127873' 00:20:26.723 07:21:00 -- 
common/autotest_common.sh@943 -- # kill 127873 00:20:26.723 07:21:00 -- common/autotest_common.sh@948 -- # wait 127873 00:20:26.723 [2024-02-13 07:21:00.405026] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:26.723 [2024-02-13 07:21:00.405142] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:26.723 [2024-02-13 07:21:00.405217] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:26.723 [2024-02-13 07:21:00.405235] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ab80 name raid_bdev1, state offline 00:20:26.982 [2024-02-13 07:21:00.609773] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:27.917 ************************************ 00:20:27.917 END TEST raid_rebuild_test_sb 00:20:27.917 ************************************ 00:20:27.917 07:21:01 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:27.917 00:20:27.917 real 0m25.380s 00:20:27.917 user 0m36.944s 00:20:27.917 sys 0m3.855s 00:20:27.917 07:21:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:27.917 07:21:01 -- common/autotest_common.sh@10 -- # set +x 00:20:28.176 07:21:01 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:20:28.177 07:21:01 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:20:28.177 07:21:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:28.177 07:21:01 -- common/autotest_common.sh@10 -- # set +x 00:20:28.177 ************************************ 00:20:28.177 START TEST raid_rebuild_test_io 00:20:28.177 ************************************ 00:20:28.177 07:21:01 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid1 2 false true 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@544 -- # raid_pid=128538 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@545 -- # waitforlisten 128538 
/var/tmp/spdk-raid.sock 00:20:28.177 07:21:01 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:28.177 07:21:01 -- common/autotest_common.sh@817 -- # '[' -z 128538 ']' 00:20:28.177 07:21:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:28.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:28.177 07:21:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:28.177 07:21:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:28.177 07:21:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:28.177 07:21:01 -- common/autotest_common.sh@10 -- # set +x 00:20:28.177 [2024-02-13 07:21:01.713132] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:20:28.177 [2024-02-13 07:21:01.713514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128538 ] 00:20:28.177 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:28.177 Zero copy mechanism will not be used. 00:20:28.436 [2024-02-13 07:21:01.878504] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.436 [2024-02-13 07:21:02.061275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.695 [2024-02-13 07:21:02.244553] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:28.954 07:21:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:28.954 07:21:02 -- common/autotest_common.sh@850 -- # return 0 00:20:28.954 07:21:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:28.954 07:21:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:28.954 07:21:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:20:29.213 BaseBdev1 00:20:29.213 07:21:02 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:29.213 07:21:02 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:20:29.213 07:21:02 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:20:29.471 BaseBdev2 00:20:29.471 07:21:03 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:29.730 spare_malloc 00:20:29.730 07:21:03 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:29.988 spare_delay 00:20:29.988 07:21:03 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:30.247 [2024-02-13 07:21:03.828039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:30.247 [2024-02-13 07:21:03.828126] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:30.247 [2024-02-13 07:21:03.828156] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:30.247 
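The `spare` bdev in this test is a three-layer stack: a malloc bdev wrapped in a delay bdev, wrapped in a passthru bdev, so the rebuild target gets a stable name and artificially slow writes. A sketch rebuilding the same stack by hand against the running target; the flag reading (`-r`/`-t` average and p99 read latency, `-w`/`-n` average and p99 write latency, in microseconds) is my interpretation of `bdev_delay_create`, so verify against `rpc.py bdev_delay_create --help`:

    #!/usr/bin/env bash
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b spare_malloc          # 32 MiB, 512 B blocks
    $RPC bdev_delay_create -b spare_malloc -d spare_delay \
         -r 0 -t 0 -w 100000 -n 100000                      # reads fast, writes ~100 ms
    $RPC bdev_passthru_create -b spare_delay -p spare       # final name the test expects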
[2024-02-13 07:21:03.828204] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:30.247 [2024-02-13 07:21:03.830637] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:30.247 [2024-02-13 07:21:03.830682] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:30.247 spare 00:20:30.247 07:21:03 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:30.505 [2024-02-13 07:21:04.032170] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:30.505 [2024-02-13 07:21:04.034077] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:30.505 [2024-02-13 07:21:04.034190] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008480 00:20:30.505 [2024-02-13 07:21:04.034204] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:20:30.505 [2024-02-13 07:21:04.034354] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:20:30.505 [2024-02-13 07:21:04.034760] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008480 00:20:30.505 [2024-02-13 07:21:04.034781] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008480 00:20:30.505 [2024-02-13 07:21:04.034938] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:30.505 07:21:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:30.506 07:21:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:30.506 07:21:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.764 07:21:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:30.764 "name": "raid_bdev1", 00:20:30.764 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:30.764 "strip_size_kb": 0, 00:20:30.764 "state": "online", 00:20:30.764 "raid_level": "raid1", 00:20:30.764 "superblock": false, 00:20:30.764 "num_base_bdevs": 2, 00:20:30.764 "num_base_bdevs_discovered": 2, 00:20:30.764 "num_base_bdevs_operational": 2, 00:20:30.764 "base_bdevs_list": [ 00:20:30.764 { 00:20:30.764 "name": "BaseBdev1", 00:20:30.764 "uuid": "4e45bbcb-3c43-48f5-a220-665555f8b3bd", 00:20:30.764 "is_configured": true, 00:20:30.764 "data_offset": 0, 00:20:30.764 "data_size": 65536 00:20:30.764 }, 00:20:30.764 { 00:20:30.764 "name": "BaseBdev2", 00:20:30.764 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:30.764 "is_configured": true, 00:20:30.764 "data_offset": 0, 00:20:30.764 "data_size": 65536 00:20:30.764 } 00:20:30.764 ] 00:20:30.764 }' 
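With both bases and the delayed spare in place, the test assembles a two-disk RAID1 without a superblock (hence `"superblock": false` and `data_offset: 0` in the dump above) and polls its state over the RPC socket. A condensed sketch of the create-and-verify pair, with the jq filter lifted from the trace:

    #!/usr/bin/env bash
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1
    state=$($RPC bdev_raid_get_bdevs all |
            jq -r '.[] | select(.name == "raid_bdev1") | .state')
    [[ $state == online ]] && echo "raid_bdev1 is online"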
00:20:30.764 07:21:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:30.764 07:21:04 -- common/autotest_common.sh@10 -- # set +x 00:20:31.331 07:21:04 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:31.331 07:21:04 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:31.590 [2024-02-13 07:21:05.136581] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:31.590 07:21:05 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:20:31.590 07:21:05 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:31.590 07:21:05 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:31.849 07:21:05 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:20:31.849 07:21:05 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:31.849 07:21:05 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:31.849 07:21:05 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:31.849 [2024-02-13 07:21:05.447163] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:20:31.849 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:31.849 Zero copy mechanism will not be used. 00:20:31.849 Running I/O for 60 seconds... 00:20:31.849 [2024-02-13 07:21:05.534295] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:31.849 [2024-02-13 07:21:05.540243] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:32.108 07:21:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.367 07:21:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:32.367 "name": "raid_bdev1", 00:20:32.367 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:32.367 "strip_size_kb": 0, 00:20:32.367 "state": "online", 00:20:32.367 "raid_level": "raid1", 00:20:32.367 "superblock": false, 00:20:32.367 "num_base_bdevs": 2, 00:20:32.367 "num_base_bdevs_discovered": 1, 00:20:32.367 "num_base_bdevs_operational": 1, 00:20:32.367 "base_bdevs_list": [ 00:20:32.367 { 00:20:32.367 "name": null, 00:20:32.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:32.367 "is_configured": false, 00:20:32.367 "data_offset": 0, 00:20:32.367 "data_size": 65536 00:20:32.367 }, 00:20:32.367 { 00:20:32.367 "name": "BaseBdev2", 
00:20:32.367 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:32.367 "is_configured": true, 00:20:32.367 "data_offset": 0, 00:20:32.367 "data_size": 65536 00:20:32.367 } 00:20:32.367 ] 00:20:32.367 }' 00:20:32.367 07:21:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:32.367 07:21:05 -- common/autotest_common.sh@10 -- # set +x 00:20:32.935 07:21:06 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:33.193 [2024-02-13 07:21:06.730630] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:33.193 [2024-02-13 07:21:06.730700] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:33.193 07:21:06 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:33.193 [2024-02-13 07:21:06.802979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:33.193 [2024-02-13 07:21:06.805083] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:33.452 [2024-02-13 07:21:06.920451] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:33.452 [2024-02-13 07:21:06.920910] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:33.711 [2024-02-13 07:21:07.168031] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:33.711 [2024-02-13 07:21:07.168273] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:33.969 [2024-02-13 07:21:07.511224] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:33.969 [2024-02-13 07:21:07.613307] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:33.969 [2024-02-13 07:21:07.613513] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:34.227 07:21:07 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.227 07:21:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:34.227 07:21:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:34.227 07:21:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:34.227 07:21:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:34.227 07:21:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:34.227 07:21:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.227 [2024-02-13 07:21:07.851957] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:34.485 07:21:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:34.485 "name": "raid_bdev1", 00:20:34.485 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:34.485 "strip_size_kb": 0, 00:20:34.485 "state": "online", 00:20:34.485 "raid_level": "raid1", 00:20:34.485 "superblock": false, 00:20:34.485 "num_base_bdevs": 2, 00:20:34.485 "num_base_bdevs_discovered": 2, 00:20:34.485 "num_base_bdevs_operational": 2, 00:20:34.485 "process": { 00:20:34.485 "type": "rebuild", 00:20:34.485 "target": "spare", 00:20:34.485 "progress": { 
00:20:34.485 "blocks": 14336, 00:20:34.485 "percent": 21 00:20:34.485 } 00:20:34.485 }, 00:20:34.485 "base_bdevs_list": [ 00:20:34.485 { 00:20:34.485 "name": "spare", 00:20:34.485 "uuid": "7e3100fb-cfa2-56e9-a845-647b80e0778a", 00:20:34.485 "is_configured": true, 00:20:34.485 "data_offset": 0, 00:20:34.485 "data_size": 65536 00:20:34.485 }, 00:20:34.485 { 00:20:34.485 "name": "BaseBdev2", 00:20:34.485 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:34.485 "is_configured": true, 00:20:34.485 "data_offset": 0, 00:20:34.485 "data_size": 65536 00:20:34.485 } 00:20:34.485 ] 00:20:34.485 }' 00:20:34.485 07:21:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:34.485 [2024-02-13 07:21:08.059838] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:34.485 [2024-02-13 07:21:08.060251] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:34.485 07:21:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.485 07:21:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:34.485 07:21:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.486 07:21:08 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:34.749 [2024-02-13 07:21:08.359899] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:34.749 [2024-02-13 07:21:08.405735] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:35.014 [2024-02-13 07:21:08.512510] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:35.014 [2024-02-13 07:21:08.520504] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:35.014 [2024-02-13 07:21:08.552525] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005790 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.014 07:21:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:35.272 07:21:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:35.272 "name": "raid_bdev1", 00:20:35.272 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:35.272 "strip_size_kb": 0, 00:20:35.272 "state": "online", 00:20:35.272 "raid_level": "raid1", 00:20:35.272 "superblock": false, 00:20:35.272 "num_base_bdevs": 2, 00:20:35.272 "num_base_bdevs_discovered": 1, 00:20:35.272 "num_base_bdevs_operational": 1, 
00:20:35.272 "base_bdevs_list": [ 00:20:35.272 { 00:20:35.272 "name": null, 00:20:35.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:35.272 "is_configured": false, 00:20:35.272 "data_offset": 0, 00:20:35.272 "data_size": 65536 00:20:35.272 }, 00:20:35.272 { 00:20:35.272 "name": "BaseBdev2", 00:20:35.272 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:35.272 "is_configured": true, 00:20:35.272 "data_offset": 0, 00:20:35.272 "data_size": 65536 00:20:35.272 } 00:20:35.272 ] 00:20:35.272 }' 00:20:35.272 07:21:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:35.272 07:21:08 -- common/autotest_common.sh@10 -- # set +x 00:20:35.839 07:21:09 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:35.839 07:21:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:35.839 07:21:09 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:35.839 07:21:09 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:35.839 07:21:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:35.839 07:21:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:35.839 07:21:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.097 07:21:09 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:36.097 "name": "raid_bdev1", 00:20:36.097 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:36.097 "strip_size_kb": 0, 00:20:36.097 "state": "online", 00:20:36.097 "raid_level": "raid1", 00:20:36.097 "superblock": false, 00:20:36.097 "num_base_bdevs": 2, 00:20:36.097 "num_base_bdevs_discovered": 1, 00:20:36.097 "num_base_bdevs_operational": 1, 00:20:36.097 "base_bdevs_list": [ 00:20:36.097 { 00:20:36.097 "name": null, 00:20:36.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.097 "is_configured": false, 00:20:36.097 "data_offset": 0, 00:20:36.097 "data_size": 65536 00:20:36.097 }, 00:20:36.097 { 00:20:36.097 "name": "BaseBdev2", 00:20:36.097 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:36.097 "is_configured": true, 00:20:36.097 "data_offset": 0, 00:20:36.097 "data_size": 65536 00:20:36.097 } 00:20:36.097 ] 00:20:36.097 }' 00:20:36.097 07:21:09 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:36.097 07:21:09 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:36.097 07:21:09 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:36.356 07:21:09 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:36.357 07:21:09 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:36.615 [2024-02-13 07:21:10.069545] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:36.615 [2024-02-13 07:21:10.069650] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:36.615 07:21:10 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:36.615 [2024-02-13 07:21:10.117916] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:36.615 [2024-02-13 07:21:10.120190] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.615 [2024-02-13 07:21:10.242864] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:36.615 [2024-02-13 07:21:10.243284] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 
offset_begin: 0 offset_end: 6144 00:20:36.873 [2024-02-13 07:21:10.477171] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:36.873 [2024-02-13 07:21:10.477393] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:37.131 [2024-02-13 07:21:10.808579] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:37.131 [2024-02-13 07:21:10.809014] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:37.391 [2024-02-13 07:21:11.010701] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:37.391 [2024-02-13 07:21:11.010900] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:37.649 07:21:11 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.649 07:21:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:37.649 07:21:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:37.650 07:21:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:37.650 07:21:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:37.650 07:21:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.650 07:21:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.650 [2024-02-13 07:21:11.342767] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:37.909 "name": "raid_bdev1", 00:20:37.909 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:37.909 "strip_size_kb": 0, 00:20:37.909 "state": "online", 00:20:37.909 "raid_level": "raid1", 00:20:37.909 "superblock": false, 00:20:37.909 "num_base_bdevs": 2, 00:20:37.909 "num_base_bdevs_discovered": 2, 00:20:37.909 "num_base_bdevs_operational": 2, 00:20:37.909 "process": { 00:20:37.909 "type": "rebuild", 00:20:37.909 "target": "spare", 00:20:37.909 "progress": { 00:20:37.909 "blocks": 12288, 00:20:37.909 "percent": 18 00:20:37.909 } 00:20:37.909 }, 00:20:37.909 "base_bdevs_list": [ 00:20:37.909 { 00:20:37.909 "name": "spare", 00:20:37.909 "uuid": "7e3100fb-cfa2-56e9-a845-647b80e0778a", 00:20:37.909 "is_configured": true, 00:20:37.909 "data_offset": 0, 00:20:37.909 "data_size": 65536 00:20:37.909 }, 00:20:37.909 { 00:20:37.909 "name": "BaseBdev2", 00:20:37.909 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:37.909 "is_configured": true, 00:20:37.909 "data_offset": 0, 00:20:37.909 "data_size": 65536 00:20:37.909 } 00:20:37.909 ] 00:20:37.909 }' 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:37.909 07:21:11 -- 
bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@657 -- # local timeout=452 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:37.909 07:21:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.168 07:21:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:38.168 "name": "raid_bdev1", 00:20:38.168 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:38.168 "strip_size_kb": 0, 00:20:38.168 "state": "online", 00:20:38.168 "raid_level": "raid1", 00:20:38.168 "superblock": false, 00:20:38.168 "num_base_bdevs": 2, 00:20:38.168 "num_base_bdevs_discovered": 2, 00:20:38.168 "num_base_bdevs_operational": 2, 00:20:38.168 "process": { 00:20:38.168 "type": "rebuild", 00:20:38.168 "target": "spare", 00:20:38.168 "progress": { 00:20:38.168 "blocks": 18432, 00:20:38.168 "percent": 28 00:20:38.168 } 00:20:38.168 }, 00:20:38.168 "base_bdevs_list": [ 00:20:38.168 { 00:20:38.168 "name": "spare", 00:20:38.168 "uuid": "7e3100fb-cfa2-56e9-a845-647b80e0778a", 00:20:38.168 "is_configured": true, 00:20:38.168 "data_offset": 0, 00:20:38.168 "data_size": 65536 00:20:38.168 }, 00:20:38.168 { 00:20:38.168 "name": "BaseBdev2", 00:20:38.168 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:38.168 "is_configured": true, 00:20:38.168 "data_offset": 0, 00:20:38.168 "data_size": 65536 00:20:38.168 } 00:20:38.168 ] 00:20:38.168 }' 00:20:38.168 07:21:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:38.168 [2024-02-13 07:21:11.703143] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:38.168 07:21:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:38.168 07:21:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:38.168 07:21:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:38.168 07:21:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:38.427 [2024-02-13 07:21:11.917977] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:38.685 [2024-02-13 07:21:12.152089] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:38.685 [2024-02-13 07:21:12.361316] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:38.685 [2024-02-13 07:21:12.361689] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:39.251 07:21:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:39.251 07:21:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:39.251 07:21:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:39.251 07:21:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:39.251 
07:21:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:39.251 07:21:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:39.251 07:21:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:39.251 07:21:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:39.509 07:21:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:39.509 "name": "raid_bdev1", 00:20:39.509 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:39.509 "strip_size_kb": 0, 00:20:39.509 "state": "online", 00:20:39.509 "raid_level": "raid1", 00:20:39.509 "superblock": false, 00:20:39.509 "num_base_bdevs": 2, 00:20:39.509 "num_base_bdevs_discovered": 2, 00:20:39.509 "num_base_bdevs_operational": 2, 00:20:39.509 "process": { 00:20:39.509 "type": "rebuild", 00:20:39.509 "target": "spare", 00:20:39.509 "progress": { 00:20:39.509 "blocks": 36864, 00:20:39.509 "percent": 56 00:20:39.509 } 00:20:39.509 }, 00:20:39.509 "base_bdevs_list": [ 00:20:39.509 { 00:20:39.509 "name": "spare", 00:20:39.509 "uuid": "7e3100fb-cfa2-56e9-a845-647b80e0778a", 00:20:39.509 "is_configured": true, 00:20:39.509 "data_offset": 0, 00:20:39.509 "data_size": 65536 00:20:39.509 }, 00:20:39.509 { 00:20:39.509 "name": "BaseBdev2", 00:20:39.509 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:39.509 "is_configured": true, 00:20:39.509 "data_offset": 0, 00:20:39.509 "data_size": 65536 00:20:39.509 } 00:20:39.509 ] 00:20:39.509 }' 00:20:39.509 07:21:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:39.509 07:21:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:39.509 07:21:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:39.509 07:21:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:39.509 07:21:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:39.509 [2024-02-13 07:21:13.174352] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:40.447 [2024-02-13 07:21:13.825039] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:20:40.447 [2024-02-13 07:21:14.055960] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:20:40.705 07:21:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:40.705 07:21:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:40.705 07:21:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:40.705 07:21:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:40.705 07:21:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:40.705 07:21:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:40.705 07:21:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:40.705 07:21:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.705 [2024-02-13 07:21:14.280334] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:20:40.964 07:21:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:40.964 "name": "raid_bdev1", 00:20:40.964 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:40.964 "strip_size_kb": 0, 00:20:40.964 "state": "online", 00:20:40.964 "raid_level": 
"raid1", 00:20:40.964 "superblock": false, 00:20:40.964 "num_base_bdevs": 2, 00:20:40.964 "num_base_bdevs_discovered": 2, 00:20:40.964 "num_base_bdevs_operational": 2, 00:20:40.964 "process": { 00:20:40.964 "type": "rebuild", 00:20:40.964 "target": "spare", 00:20:40.964 "progress": { 00:20:40.964 "blocks": 57344, 00:20:40.964 "percent": 87 00:20:40.964 } 00:20:40.964 }, 00:20:40.964 "base_bdevs_list": [ 00:20:40.964 { 00:20:40.964 "name": "spare", 00:20:40.964 "uuid": "7e3100fb-cfa2-56e9-a845-647b80e0778a", 00:20:40.964 "is_configured": true, 00:20:40.964 "data_offset": 0, 00:20:40.964 "data_size": 65536 00:20:40.964 }, 00:20:40.964 { 00:20:40.964 "name": "BaseBdev2", 00:20:40.964 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:40.964 "is_configured": true, 00:20:40.964 "data_offset": 0, 00:20:40.964 "data_size": 65536 00:20:40.964 } 00:20:40.964 ] 00:20:40.964 }' 00:20:40.964 07:21:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:40.964 07:21:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:40.964 07:21:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:40.964 07:21:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:40.964 07:21:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:41.223 [2024-02-13 07:21:14.824578] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:41.482 [2024-02-13 07:21:14.930996] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:41.482 [2024-02-13 07:21:14.933677] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:42.048 07:21:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:42.049 07:21:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:42.049 07:21:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:42.049 07:21:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:42.049 07:21:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:42.049 07:21:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:42.049 07:21:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.049 07:21:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:42.307 "name": "raid_bdev1", 00:20:42.307 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:42.307 "strip_size_kb": 0, 00:20:42.307 "state": "online", 00:20:42.307 "raid_level": "raid1", 00:20:42.307 "superblock": false, 00:20:42.307 "num_base_bdevs": 2, 00:20:42.307 "num_base_bdevs_discovered": 2, 00:20:42.307 "num_base_bdevs_operational": 2, 00:20:42.307 "base_bdevs_list": [ 00:20:42.307 { 00:20:42.307 "name": "spare", 00:20:42.307 "uuid": "7e3100fb-cfa2-56e9-a845-647b80e0778a", 00:20:42.307 "is_configured": true, 00:20:42.307 "data_offset": 0, 00:20:42.307 "data_size": 65536 00:20:42.307 }, 00:20:42.307 { 00:20:42.307 "name": "BaseBdev2", 00:20:42.307 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:42.307 "is_configured": true, 00:20:42.307 "data_offset": 0, 00:20:42.307 "data_size": 65536 00:20:42.307 } 00:20:42.307 ] 00:20:42.307 }' 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@191 -- # jq -r 
'.process.target // "none"' 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@660 -- # break 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.307 07:21:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.566 07:21:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:42.566 "name": "raid_bdev1", 00:20:42.566 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:42.566 "strip_size_kb": 0, 00:20:42.566 "state": "online", 00:20:42.566 "raid_level": "raid1", 00:20:42.566 "superblock": false, 00:20:42.566 "num_base_bdevs": 2, 00:20:42.566 "num_base_bdevs_discovered": 2, 00:20:42.566 "num_base_bdevs_operational": 2, 00:20:42.566 "base_bdevs_list": [ 00:20:42.566 { 00:20:42.566 "name": "spare", 00:20:42.566 "uuid": "7e3100fb-cfa2-56e9-a845-647b80e0778a", 00:20:42.566 "is_configured": true, 00:20:42.566 "data_offset": 0, 00:20:42.566 "data_size": 65536 00:20:42.566 }, 00:20:42.566 { 00:20:42.566 "name": "BaseBdev2", 00:20:42.566 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:42.566 "is_configured": true, 00:20:42.566 "data_offset": 0, 00:20:42.566 "data_size": 65536 00:20:42.566 } 00:20:42.566 ] 00:20:42.566 }' 00:20:42.566 07:21:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:42.566 07:21:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:42.566 07:21:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:42.827 07:21:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:42.828 "name": "raid_bdev1", 00:20:42.828 "uuid": "8f99aba8-4642-40cd-9627-2cd461bce7c8", 00:20:42.828 "strip_size_kb": 0, 00:20:42.828 "state": "online", 00:20:42.828 "raid_level": "raid1", 00:20:42.828 "superblock": false, 00:20:42.828 "num_base_bdevs": 2, 00:20:42.828 "num_base_bdevs_discovered": 2, 00:20:42.828 "num_base_bdevs_operational": 2, 00:20:42.828 
"base_bdevs_list": [ 00:20:42.828 { 00:20:42.828 "name": "spare", 00:20:42.828 "uuid": "7e3100fb-cfa2-56e9-a845-647b80e0778a", 00:20:42.828 "is_configured": true, 00:20:42.828 "data_offset": 0, 00:20:42.828 "data_size": 65536 00:20:42.828 }, 00:20:42.828 { 00:20:42.828 "name": "BaseBdev2", 00:20:42.828 "uuid": "8e7e1ab1-21f4-48dc-927a-6023f4b39277", 00:20:42.828 "is_configured": true, 00:20:42.828 "data_offset": 0, 00:20:42.828 "data_size": 65536 00:20:42.828 } 00:20:42.828 ] 00:20:42.828 }' 00:20:42.828 07:21:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:42.828 07:21:16 -- common/autotest_common.sh@10 -- # set +x 00:20:43.764 07:21:17 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:20:43.764 [2024-02-13 07:21:17.422190] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:43.764 [2024-02-13 07:21:17.422260] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:44.023 00:20:44.023 Latency(us) 00:20:44.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.023 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:20:44.023 raid_bdev1 : 12.07 103.24 309.72 0.00 0.00 13204.88 303.48 115819.99 00:20:44.023 =================================================================================================================== 00:20:44.023 Total : 103.24 309.72 0.00 0.00 13204.88 303.48 115819.99 00:20:44.023 [2024-02-13 07:21:17.533126] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.023 [2024-02-13 07:21:17.533183] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:44.023 [2024-02-13 07:21:17.533279] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:44.023 [2024-02-13 07:21:17.533293] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008480 name raid_bdev1, state offline 00:20:44.023 0 00:20:44.023 07:21:17 -- bdev/bdev_raid.sh@671 -- # jq length 00:20:44.023 07:21:17 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:44.282 07:21:17 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:20:44.282 07:21:17 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:20:44.282 07:21:17 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:20:44.282 07:21:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:44.282 07:21:17 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:44.282 07:21:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:44.282 07:21:17 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:44.282 07:21:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:44.282 07:21:17 -- bdev/nbd_common.sh@12 -- # local i 00:20:44.282 07:21:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:44.282 07:21:17 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.282 07:21:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:20:44.541 /dev/nbd0 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:44.541 07:21:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:20:44.541 07:21:18 -- common/autotest_common.sh@855 -- # 
local i 00:20:44.541 07:21:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:44.541 07:21:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:44.541 07:21:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:20:44.541 07:21:18 -- common/autotest_common.sh@859 -- # break 00:20:44.541 07:21:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:44.541 07:21:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:44.541 07:21:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.541 1+0 records in 00:20:44.541 1+0 records out 00:20:44.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482467 s, 8.5 MB/s 00:20:44.541 07:21:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.541 07:21:18 -- common/autotest_common.sh@872 -- # size=4096 00:20:44.541 07:21:18 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.541 07:21:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:44.541 07:21:18 -- common/autotest_common.sh@875 -- # return 0 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.541 07:21:18 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:20:44.541 07:21:18 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:20:44.541 07:21:18 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@12 -- # local i 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.541 07:21:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:20:44.800 /dev/nbd1 00:20:44.800 07:21:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:44.800 07:21:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:44.800 07:21:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:20:44.800 07:21:18 -- common/autotest_common.sh@855 -- # local i 00:20:44.800 07:21:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:20:44.800 07:21:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:20:44.800 07:21:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:20:44.800 07:21:18 -- common/autotest_common.sh@859 -- # break 00:20:44.800 07:21:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:20:44.800 07:21:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:20:44.800 07:21:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:44.800 1+0 records in 00:20:44.800 1+0 records out 00:20:44.800 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363197 s, 11.3 MB/s 00:20:44.800 07:21:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.800 07:21:18 -- common/autotest_common.sh@872 -- # size=4096 00:20:44.800 07:21:18 
-- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:44.800 07:21:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:20:44.800 07:21:18 -- common/autotest_common.sh@875 -- # return 0 00:20:44.800 07:21:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:44.800 07:21:18 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:44.800 07:21:18 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:20:45.058 07:21:18 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@51 -- # local i 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.058 07:21:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@41 -- # break 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.315 07:21:18 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@51 -- # local i 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:45.315 07:21:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@41 -- # break 00:20:45.574 07:21:19 -- bdev/nbd_common.sh@45 -- # return 0 00:20:45.574 07:21:19 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:20:45.574 07:21:19 -- bdev/bdev_raid.sh@709 -- # killprocess 128538 00:20:45.574 07:21:19 -- common/autotest_common.sh@924 -- # '[' -z 128538 ']' 
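Annotation: the cmp above is the actual correctness check of the rebuild test. After the rebuild completes, the rebuilt spare and the surviving base bdev are both exported over NBD and compared byte for byte, which only passes if the rebuild reproduced the mirror exactly. A condensed sketch of that verify-and-teardown sequence (device names, RPC names, and socket path as in this run; error handling omitted):

    # Export both raid1 legs over NBD and byte-compare them.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    rpc nbd_start_disk spare     /dev/nbd0
    rpc nbd_start_disk BaseBdev2 /dev/nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1        # non-zero exit fails the test
    for nbd in nbd1 nbd0; do
        rpc nbd_stop_disk "/dev/$nbd"
        # Same wait the trace shows: the device is gone once its name
        # no longer appears in /proc/partitions.
        while grep -q -w "$nbd" /proc/partitions; do sleep 0.1; done
    done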
00:20:45.574 07:21:19 -- common/autotest_common.sh@928 -- # kill -0 128538 00:20:45.574 07:21:19 -- common/autotest_common.sh@929 -- # uname 00:20:45.574 07:21:19 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:20:45.574 07:21:19 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 128538 00:20:45.574 07:21:19 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:20:45.574 07:21:19 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:20:45.574 07:21:19 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 128538' 00:20:45.574 killing process with pid 128538 00:20:45.574 07:21:19 -- common/autotest_common.sh@943 -- # kill 128538 00:20:45.574 07:21:19 -- common/autotest_common.sh@948 -- # wait 128538 00:20:45.574 Received shutdown signal, test time was about 13.817382 seconds 00:20:45.574 00:20:45.574 Latency(us) 00:20:45.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:45.574 =================================================================================================================== 00:20:45.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:45.574 [2024-02-13 07:21:19.266405] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:45.833 [2024-02-13 07:21:19.434092] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:47.218 ************************************ 00:20:47.218 END TEST raid_rebuild_test_io 00:20:47.218 ************************************ 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@711 -- # return 0 00:20:47.218 00:20:47.218 real 0m18.824s 00:20:47.218 user 0m28.903s 00:20:47.218 sys 0m1.940s 00:20:47.218 07:21:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:47.218 07:21:20 -- common/autotest_common.sh@10 -- # set +x 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:20:47.218 07:21:20 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:20:47.218 07:21:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:20:47.218 07:21:20 -- common/autotest_common.sh@10 -- # set +x 00:20:47.218 ************************************ 00:20:47.218 START TEST raid_rebuild_test_sb_io 00:20:47.218 ************************************ 00:20:47.218 07:21:20 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid1 2 true true 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:20:47.218 
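Annotation: killprocess, traced above, is autotest_common.sh's guarded teardown — it confirms the PID is still alive with kill -0, resolves the process name so it never signals a stray sudo wrapper, then kills and reaps the process so wait collects the exit status. Roughly, as a sketch of the logic the trace shows (not a verbatim copy of the helper):

    killprocess() {                       # usage: killprocess <pid>
        local pid=$1
        [ -z "$pid" ] && return 1         # the "'[' -z 128538 ']'" check above
        kill -0 "$pid" || return 1        # is it still running?
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1   # never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                       # reap and propagate the exit status
    }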
07:21:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=129067 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129067 /var/tmp/spdk-raid.sock 00:20:47.218 07:21:20 -- common/autotest_common.sh@817 -- # '[' -z 129067 ']' 00:20:47.218 07:21:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:47.218 07:21:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:20:47.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:20:47.218 07:21:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:47.218 07:21:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:20:47.218 07:21:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:47.218 07:21:20 -- common/autotest_common.sh@10 -- # set +x 00:20:47.218 [2024-02-13 07:21:20.607463] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:20:47.218 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:47.218 Zero copy mechanism will not be used. 
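Annotation: the _sb_io variant drives I/O from the bdevperf example app while the rebuild runs: -w randrw -M 50 gives a 50/50 read/write mix, -o 3M -q 2 keeps two 3 MiB requests in flight, -z starts the app suspended until an RPC arrives, and -L bdev_raid enables the debug log flag that produces the "split:" lines filling this log. waitforlisten then blocks until the -r socket accepts RPCs. One plausible way to express that wait from a shell (an assumed polling loop, not the actual waitforlisten implementation):

    # Launch bdevperf against the raid bdev and wait for its RPC socket.
    sock=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r "$sock" -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
    raid_pid=$!
    # Retry a trivial RPC until the socket answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods \
            >/dev/null 2>&1; do
        kill -0 "$raid_pid" || exit 1     # bail out if the app died early
        sleep 0.1
    done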
00:20:47.218 [2024-02-13 07:21:20.607679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129067 ] 00:20:47.218 [2024-02-13 07:21:20.772603] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.479 [2024-02-13 07:21:20.973427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:47.479 [2024-02-13 07:21:21.158476] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.045 07:21:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:48.045 07:21:21 -- common/autotest_common.sh@850 -- # return 0 00:20:48.045 07:21:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:48.045 07:21:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:48.045 07:21:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:20:48.304 BaseBdev1_malloc 00:20:48.304 07:21:21 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:48.304 [2024-02-13 07:21:21.989070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:48.304 [2024-02-13 07:21:21.989188] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.304 [2024-02-13 07:21:21.989222] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:20:48.304 [2024-02-13 07:21:21.989261] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.304 [2024-02-13 07:21:21.991388] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.304 [2024-02-13 07:21:21.991453] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:48.304 BaseBdev1 00:20:48.563 07:21:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:20:48.563 07:21:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:20:48.563 07:21:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:20:48.563 BaseBdev2_malloc 00:20:48.563 07:21:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:48.822 [2024-02-13 07:21:22.444422] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:48.822 [2024-02-13 07:21:22.444532] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.822 [2024-02-13 07:21:22.444576] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:20:48.822 [2024-02-13 07:21:22.444630] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.822 [2024-02-13 07:21:22.446624] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.822 [2024-02-13 07:21:22.446687] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:48.822 BaseBdev2 00:20:48.822 07:21:22 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:20:49.080 spare_malloc 00:20:49.080 07:21:22 
-- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:49.339 spare_delay 00:20:49.339 07:21:22 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:20:49.597 [2024-02-13 07:21:23.105765] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:49.597 [2024-02-13 07:21:23.105881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.597 [2024-02-13 07:21:23.105926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:49.597 [2024-02-13 07:21:23.105970] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.597 [2024-02-13 07:21:23.108342] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.597 [2024-02-13 07:21:23.108428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:49.597 spare 00:20:49.597 07:21:23 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:20:49.857 [2024-02-13 07:21:23.305954] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:49.857 [2024-02-13 07:21:23.307887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:49.857 [2024-02-13 07:21:23.308160] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:20:49.857 [2024-02-13 07:21:23.308201] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:20:49.857 [2024-02-13 07:21:23.308350] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:20:49.857 [2024-02-13 07:21:23.308747] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:20:49.857 [2024-02-13 07:21:23.308777] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:20:49.857 [2024-02-13 07:21:23.308960] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:49.857 "name": "raid_bdev1", 00:20:49.857 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:20:49.857 
"strip_size_kb": 0, 00:20:49.857 "state": "online", 00:20:49.857 "raid_level": "raid1", 00:20:49.857 "superblock": true, 00:20:49.857 "num_base_bdevs": 2, 00:20:49.857 "num_base_bdevs_discovered": 2, 00:20:49.857 "num_base_bdevs_operational": 2, 00:20:49.857 "base_bdevs_list": [ 00:20:49.857 { 00:20:49.857 "name": "BaseBdev1", 00:20:49.857 "uuid": "dada4fcf-5b65-5996-86bf-5d9c079304f4", 00:20:49.857 "is_configured": true, 00:20:49.857 "data_offset": 2048, 00:20:49.857 "data_size": 63488 00:20:49.857 }, 00:20:49.857 { 00:20:49.857 "name": "BaseBdev2", 00:20:49.857 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:20:49.857 "is_configured": true, 00:20:49.857 "data_offset": 2048, 00:20:49.857 "data_size": 63488 00:20:49.857 } 00:20:49.857 ] 00:20:49.857 }' 00:20:49.857 07:21:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:49.857 07:21:23 -- common/autotest_common.sh@10 -- # set +x 00:20:50.799 07:21:24 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:20:50.799 07:21:24 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:20:50.799 [2024-02-13 07:21:24.386353] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:50.799 07:21:24 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:20:50.799 07:21:24 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:50.799 07:21:24 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:51.058 07:21:24 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:20:51.058 07:21:24 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:20:51.058 07:21:24 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:20:51.058 07:21:24 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:20:51.058 [2024-02-13 07:21:24.725016] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:20:51.058 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:51.058 Zero copy mechanism will not be used. 00:20:51.058 Running I/O for 60 seconds... 
00:20:51.316 [2024-02-13 07:21:24.814249] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:51.317 [2024-02-13 07:21:24.827175] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:51.317 07:21:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.576 07:21:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:51.576 "name": "raid_bdev1", 00:20:51.576 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:20:51.576 "strip_size_kb": 0, 00:20:51.576 "state": "online", 00:20:51.576 "raid_level": "raid1", 00:20:51.576 "superblock": true, 00:20:51.576 "num_base_bdevs": 2, 00:20:51.576 "num_base_bdevs_discovered": 1, 00:20:51.576 "num_base_bdevs_operational": 1, 00:20:51.576 "base_bdevs_list": [ 00:20:51.576 { 00:20:51.576 "name": null, 00:20:51.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.576 "is_configured": false, 00:20:51.576 "data_offset": 2048, 00:20:51.576 "data_size": 63488 00:20:51.576 }, 00:20:51.576 { 00:20:51.576 "name": "BaseBdev2", 00:20:51.576 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:20:51.576 "is_configured": true, 00:20:51.576 "data_offset": 2048, 00:20:51.576 "data_size": 63488 00:20:51.576 } 00:20:51.576 ] 00:20:51.576 }' 00:20:51.576 07:21:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:51.576 07:21:25 -- common/autotest_common.sh@10 -- # set +x 00:20:52.144 07:21:25 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:52.402 [2024-02-13 07:21:26.059226] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:52.402 [2024-02-13 07:21:26.059295] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:52.402 [2024-02-13 07:21:26.094750] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:20:52.660 [2024-02-13 07:21:26.096958] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:52.660 07:21:26 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:20:52.660 [2024-02-13 07:21:26.205012] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:52.660 [2024-02-13 07:21:26.205577] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:52.920 [2024-02-13 07:21:26.428072] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:20:52.920 [2024-02-13 07:21:26.428412] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:53.179 [2024-02-13 07:21:26.787621] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:53.438 [2024-02-13 07:21:27.018593] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:53.438 [2024-02-13 07:21:27.018953] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:53.438 07:21:27 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:53.438 07:21:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:53.438 07:21:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:53.438 07:21:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:53.438 07:21:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:53.438 07:21:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:53.438 07:21:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:53.697 [2024-02-13 07:21:27.345415] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:53.956 07:21:27 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:53.956 "name": "raid_bdev1", 00:20:53.956 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:20:53.956 "strip_size_kb": 0, 00:20:53.956 "state": "online", 00:20:53.956 "raid_level": "raid1", 00:20:53.956 "superblock": true, 00:20:53.956 "num_base_bdevs": 2, 00:20:53.956 "num_base_bdevs_discovered": 2, 00:20:53.956 "num_base_bdevs_operational": 2, 00:20:53.956 "process": { 00:20:53.956 "type": "rebuild", 00:20:53.956 "target": "spare", 00:20:53.956 "progress": { 00:20:53.956 "blocks": 14336, 00:20:53.956 "percent": 22 00:20:53.956 } 00:20:53.956 }, 00:20:53.956 "base_bdevs_list": [ 00:20:53.956 { 00:20:53.956 "name": "spare", 00:20:53.956 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:20:53.956 "is_configured": true, 00:20:53.956 "data_offset": 2048, 00:20:53.956 "data_size": 63488 00:20:53.956 }, 00:20:53.956 { 00:20:53.956 "name": "BaseBdev2", 00:20:53.956 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:20:53.956 "is_configured": true, 00:20:53.956 "data_offset": 2048, 00:20:53.956 "data_size": 63488 00:20:53.956 } 00:20:53.956 ] 00:20:53.956 }' 00:20:53.956 07:21:27 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:53.956 07:21:27 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:53.956 07:21:27 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:53.956 07:21:27 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:53.956 07:21:27 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:20:53.956 [2024-02-13 07:21:27.583311] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:53.956 [2024-02-13 07:21:27.583627] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:54.215 [2024-02-13 07:21:27.766735] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:54.215 
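Annotation: the sequence running here exercises the hot-plug path under live I/O — remove a base bdev (the raid1 stays online but degraded, num_base_bdevs_discovered drops to 1), re-add the delayed "spare" with bdev_raid_add_base_bdev to kick off a rebuild, and, as the trace above just did, remove the rebuild target itself mid-rebuild, which aborts the process and returns the array to the degraded state. A sketch using the RPC names from this log (each state check here is a single unretried assertion; the real test polls before asserting):

    # Degrade, start a rebuild onto the spare, then abort it mid-flight.
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
    raid() { rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'; }
    rpc bdev_raid_remove_base_bdev BaseBdev1      # raid1 survives on one leg
    [[ $(raid | jq -r '.num_base_bdevs_discovered') == 1 ]]
    rpc bdev_raid_add_base_bdev raid_bdev1 spare  # rebuild onto the spare begins
    [[ $(raid | jq -r '.process.type // "none"') == rebuild ]]
    rpc bdev_raid_remove_base_bdev spare          # removing the target aborts it
    [[ $(raid | jq -r '.process.type // "none"') == none ]]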
[2024-02-13 07:21:27.892213] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:54.215 [2024-02-13 07:21:27.900809] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:54.473 [2024-02-13 07:21:27.933115] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:54.473 07:21:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:54.732 07:21:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:20:54.732 "name": "raid_bdev1", 00:20:54.732 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:20:54.732 "strip_size_kb": 0, 00:20:54.732 "state": "online", 00:20:54.732 "raid_level": "raid1", 00:20:54.732 "superblock": true, 00:20:54.732 "num_base_bdevs": 2, 00:20:54.732 "num_base_bdevs_discovered": 1, 00:20:54.732 "num_base_bdevs_operational": 1, 00:20:54.732 "base_bdevs_list": [ 00:20:54.732 { 00:20:54.732 "name": null, 00:20:54.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:54.732 "is_configured": false, 00:20:54.732 "data_offset": 2048, 00:20:54.732 "data_size": 63488 00:20:54.732 }, 00:20:54.732 { 00:20:54.732 "name": "BaseBdev2", 00:20:54.732 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:20:54.732 "is_configured": true, 00:20:54.732 "data_offset": 2048, 00:20:54.732 "data_size": 63488 00:20:54.732 } 00:20:54.732 ] 00:20:54.732 }' 00:20:54.732 07:21:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:20:54.732 07:21:28 -- common/autotest_common.sh@10 -- # set +x 00:20:55.299 07:21:28 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:55.299 07:21:28 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:55.300 07:21:28 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:20:55.300 07:21:28 -- bdev/bdev_raid.sh@185 -- # local target=none 00:20:55.300 07:21:28 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:55.300 07:21:28 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:55.300 07:21:28 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.558 07:21:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:55.558 "name": "raid_bdev1", 00:20:55.558 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:20:55.558 "strip_size_kb": 0, 00:20:55.558 "state": "online", 00:20:55.558 "raid_level": "raid1", 00:20:55.558 "superblock": true, 00:20:55.558 "num_base_bdevs": 2, 00:20:55.558 "num_base_bdevs_discovered": 1, 00:20:55.558 
"num_base_bdevs_operational": 1, 00:20:55.558 "base_bdevs_list": [ 00:20:55.558 { 00:20:55.558 "name": null, 00:20:55.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:55.558 "is_configured": false, 00:20:55.558 "data_offset": 2048, 00:20:55.558 "data_size": 63488 00:20:55.558 }, 00:20:55.558 { 00:20:55.558 "name": "BaseBdev2", 00:20:55.558 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:20:55.558 "is_configured": true, 00:20:55.558 "data_offset": 2048, 00:20:55.558 "data_size": 63488 00:20:55.558 } 00:20:55.558 ] 00:20:55.558 }' 00:20:55.558 07:21:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:55.558 07:21:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:20:55.558 07:21:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:55.558 07:21:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:20:55.558 07:21:29 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:20:55.817 [2024-02-13 07:21:29.408684] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:20:55.817 [2024-02-13 07:21:29.408761] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:55.817 07:21:29 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:20:55.817 [2024-02-13 07:21:29.465580] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:20:55.817 [2024-02-13 07:21:29.467630] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:56.074 [2024-02-13 07:21:29.587629] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:56.074 [2024-02-13 07:21:29.588152] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:20:56.332 [2024-02-13 07:21:29.802479] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:56.332 [2024-02-13 07:21:29.802735] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:20:56.590 [2024-02-13 07:21:30.143842] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:20:56.590 [2024-02-13 07:21:30.263604] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:56.590 [2024-02-13 07:21:30.263771] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:20:56.849 07:21:30 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:56.849 07:21:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:56.849 07:21:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:56.849 07:21:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:56.849 07:21:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:56.849 07:21:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:56.849 07:21:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.108 [2024-02-13 07:21:30.584830] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:57.108 
[2024-02-13 07:21:30.585189] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:20:57.108 07:21:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:57.108 "name": "raid_bdev1", 00:20:57.108 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:20:57.108 "strip_size_kb": 0, 00:20:57.108 "state": "online", 00:20:57.108 "raid_level": "raid1", 00:20:57.108 "superblock": true, 00:20:57.108 "num_base_bdevs": 2, 00:20:57.108 "num_base_bdevs_discovered": 2, 00:20:57.108 "num_base_bdevs_operational": 2, 00:20:57.108 "process": { 00:20:57.108 "type": "rebuild", 00:20:57.108 "target": "spare", 00:20:57.108 "progress": { 00:20:57.108 "blocks": 14336, 00:20:57.108 "percent": 22 00:20:57.108 } 00:20:57.108 }, 00:20:57.108 "base_bdevs_list": [ 00:20:57.108 { 00:20:57.108 "name": "spare", 00:20:57.108 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:20:57.108 "is_configured": true, 00:20:57.108 "data_offset": 2048, 00:20:57.108 "data_size": 63488 00:20:57.108 }, 00:20:57.108 { 00:20:57.108 "name": "BaseBdev2", 00:20:57.108 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:20:57.108 "is_configured": true, 00:20:57.108 "data_offset": 2048, 00:20:57.108 "data_size": 63488 00:20:57.108 } 00:20:57.108 ] 00:20:57.108 }' 00:20:57.108 07:21:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:57.108 07:21:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.108 07:21:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:20:57.367 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@657 -- # local timeout=471 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:57.367 07:21:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:57.367 [2024-02-13 07:21:30.814845] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:20:57.625 07:21:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:57.625 "name": "raid_bdev1", 00:20:57.625 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:20:57.625 "strip_size_kb": 0, 00:20:57.625 "state": "online", 00:20:57.625 "raid_level": "raid1", 00:20:57.625 "superblock": true, 00:20:57.625 "num_base_bdevs": 2, 00:20:57.625 "num_base_bdevs_discovered": 2, 00:20:57.625 "num_base_bdevs_operational": 2, 00:20:57.625 "process": { 00:20:57.625 "type": "rebuild", 
00:20:57.625 "target": "spare", 00:20:57.625 "progress": { 00:20:57.625 "blocks": 18432, 00:20:57.625 "percent": 29 00:20:57.625 } 00:20:57.625 }, 00:20:57.625 "base_bdevs_list": [ 00:20:57.625 { 00:20:57.625 "name": "spare", 00:20:57.625 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:20:57.625 "is_configured": true, 00:20:57.625 "data_offset": 2048, 00:20:57.625 "data_size": 63488 00:20:57.625 }, 00:20:57.625 { 00:20:57.625 "name": "BaseBdev2", 00:20:57.625 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:20:57.625 "is_configured": true, 00:20:57.625 "data_offset": 2048, 00:20:57.625 "data_size": 63488 00:20:57.625 } 00:20:57.625 ] 00:20:57.625 }' 00:20:57.625 07:21:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:57.625 07:21:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:57.625 07:21:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:57.625 [2024-02-13 07:21:31.153582] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:20:57.625 07:21:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:57.625 07:21:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:57.625 [2024-02-13 07:21:31.276246] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:20:58.192 [2024-02-13 07:21:31.604184] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:20:58.192 [2024-02-13 07:21:31.737595] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:20:58.451 [2024-02-13 07:21:32.084711] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:20:58.719 07:21:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:58.719 07:21:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.719 07:21:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:58.719 07:21:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:58.719 07:21:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:58.719 07:21:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:58.719 07:21:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:58.719 07:21:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.991 07:21:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:20:58.991 "name": "raid_bdev1", 00:20:58.991 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:20:58.991 "strip_size_kb": 0, 00:20:58.991 "state": "online", 00:20:58.991 "raid_level": "raid1", 00:20:58.991 "superblock": true, 00:20:58.991 "num_base_bdevs": 2, 00:20:58.991 "num_base_bdevs_discovered": 2, 00:20:58.991 "num_base_bdevs_operational": 2, 00:20:58.991 "process": { 00:20:58.991 "type": "rebuild", 00:20:58.991 "target": "spare", 00:20:58.991 "progress": { 00:20:58.991 "blocks": 36864, 00:20:58.991 "percent": 58 00:20:58.991 } 00:20:58.991 }, 00:20:58.991 "base_bdevs_list": [ 00:20:58.991 { 00:20:58.991 "name": "spare", 00:20:58.991 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:20:58.991 "is_configured": true, 00:20:58.991 "data_offset": 2048, 00:20:58.991 "data_size": 63488 00:20:58.991 }, 00:20:58.991 { 00:20:58.991 "name": "BaseBdev2", 00:20:58.991 
"uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:20:58.991 "is_configured": true, 00:20:58.991 "data_offset": 2048, 00:20:58.991 "data_size": 63488 00:20:58.991 } 00:20:58.991 ] 00:20:58.991 }' 00:20:58.991 07:21:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:20:58.991 07:21:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.991 07:21:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:20:58.991 07:21:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.991 07:21:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:20:58.991 [2024-02-13 07:21:32.652583] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:20:59.927 07:21:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:20:59.927 07:21:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.927 07:21:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:20:59.927 07:21:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:20:59.927 07:21:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:20:59.927 07:21:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:20:59.927 07:21:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:20:59.927 07:21:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.185 07:21:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:00.185 "name": "raid_bdev1", 00:21:00.185 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:21:00.185 "strip_size_kb": 0, 00:21:00.185 "state": "online", 00:21:00.185 "raid_level": "raid1", 00:21:00.185 "superblock": true, 00:21:00.185 "num_base_bdevs": 2, 00:21:00.185 "num_base_bdevs_discovered": 2, 00:21:00.185 "num_base_bdevs_operational": 2, 00:21:00.185 "process": { 00:21:00.185 "type": "rebuild", 00:21:00.185 "target": "spare", 00:21:00.185 "progress": { 00:21:00.185 "blocks": 61440, 00:21:00.185 "percent": 96 00:21:00.185 } 00:21:00.185 }, 00:21:00.185 "base_bdevs_list": [ 00:21:00.185 { 00:21:00.185 "name": "spare", 00:21:00.185 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:21:00.185 "is_configured": true, 00:21:00.185 "data_offset": 2048, 00:21:00.185 "data_size": 63488 00:21:00.185 }, 00:21:00.185 { 00:21:00.185 "name": "BaseBdev2", 00:21:00.185 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:21:00.185 "is_configured": true, 00:21:00.185 "data_offset": 2048, 00:21:00.185 "data_size": 63488 00:21:00.185 } 00:21:00.185 ] 00:21:00.185 }' 00:21:00.185 07:21:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:00.185 07:21:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.185 07:21:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:00.185 [2024-02-13 07:21:33.825626] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:00.185 07:21:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.185 07:21:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:00.444 [2024-02-13 07:21:33.920421] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:00.444 [2024-02-13 07:21:33.921793] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:01.381 07:21:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:01.381 07:21:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:01.381 07:21:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:01.381 07:21:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:01.381 07:21:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:01.381 07:21:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:01.381 07:21:34 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.381 07:21:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:01.640 "name": "raid_bdev1", 00:21:01.640 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:21:01.640 "strip_size_kb": 0, 00:21:01.640 "state": "online", 00:21:01.640 "raid_level": "raid1", 00:21:01.640 "superblock": true, 00:21:01.640 "num_base_bdevs": 2, 00:21:01.640 "num_base_bdevs_discovered": 2, 00:21:01.640 "num_base_bdevs_operational": 2, 00:21:01.640 "base_bdevs_list": [ 00:21:01.640 { 00:21:01.640 "name": "spare", 00:21:01.640 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:21:01.640 "is_configured": true, 00:21:01.640 "data_offset": 2048, 00:21:01.640 "data_size": 63488 00:21:01.640 }, 00:21:01.640 { 00:21:01.640 "name": "BaseBdev2", 00:21:01.640 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:21:01.640 "is_configured": true, 00:21:01.640 "data_offset": 2048, 00:21:01.640 "data_size": 63488 00:21:01.640 } 00:21:01.640 ] 00:21:01.640 }' 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@660 -- # break 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.640 07:21:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:01.899 "name": "raid_bdev1", 00:21:01.899 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:21:01.899 "strip_size_kb": 0, 00:21:01.899 "state": "online", 00:21:01.899 "raid_level": "raid1", 00:21:01.899 "superblock": true, 00:21:01.899 "num_base_bdevs": 2, 00:21:01.899 "num_base_bdevs_discovered": 2, 00:21:01.899 "num_base_bdevs_operational": 2, 00:21:01.899 "base_bdevs_list": [ 00:21:01.899 { 00:21:01.899 "name": "spare", 00:21:01.899 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:21:01.899 "is_configured": true, 00:21:01.899 "data_offset": 2048, 00:21:01.899 "data_size": 63488 00:21:01.899 }, 00:21:01.899 { 00:21:01.899 "name": "BaseBdev2", 00:21:01.899 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:21:01.899 "is_configured": true, 00:21:01.899 "data_offset": 2048, 00:21:01.899 "data_size": 63488 00:21:01.899 } 00:21:01.899 ] 00:21:01.899 }' 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:01.899 07:21:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.158 07:21:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:02.158 "name": "raid_bdev1", 00:21:02.158 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:21:02.158 "strip_size_kb": 0, 00:21:02.158 "state": "online", 00:21:02.158 "raid_level": "raid1", 00:21:02.158 "superblock": true, 00:21:02.158 "num_base_bdevs": 2, 00:21:02.158 "num_base_bdevs_discovered": 2, 00:21:02.158 "num_base_bdevs_operational": 2, 00:21:02.158 "base_bdevs_list": [ 00:21:02.158 { 00:21:02.158 "name": "spare", 00:21:02.158 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:21:02.158 "is_configured": true, 00:21:02.158 "data_offset": 2048, 00:21:02.158 "data_size": 63488 00:21:02.158 }, 00:21:02.158 { 00:21:02.158 "name": "BaseBdev2", 00:21:02.158 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:21:02.158 "is_configured": true, 00:21:02.158 "data_offset": 2048, 00:21:02.158 "data_size": 63488 00:21:02.158 } 00:21:02.158 ] 00:21:02.158 }' 00:21:02.158 07:21:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:02.158 07:21:35 -- common/autotest_common.sh@10 -- # set +x 00:21:03.095 07:21:36 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:03.095 [2024-02-13 07:21:36.644443] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:03.095 [2024-02-13 07:21:36.644486] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:03.095 00:21:03.095 Latency(us) 00:21:03.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.095 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:21:03.095 raid_bdev1 : 11.99 108.26 324.79 0.00 0.00 12560.75 301.61 115819.99 00:21:03.095 =================================================================================================================== 00:21:03.095 Total : 108.26 324.79 0.00 0.00 12560.75 301.61 115819.99 00:21:03.095 [2024-02-13 07:21:36.731069] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.095 [2024-02-13 07:21:36.731127] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.095 [2024-02-13 07:21:36.731224] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.095 [2024-02-13 07:21:36.731238] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:03.095 0 00:21:03.095 07:21:36 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:03.095 07:21:36 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:03.354 07:21:36 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:03.354 07:21:36 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:21:03.354 07:21:36 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:21:03.354 07:21:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:03.354 07:21:36 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:03.354 07:21:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:03.354 07:21:36 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:03.354 07:21:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:03.354 07:21:36 -- bdev/nbd_common.sh@12 -- # local i 00:21:03.354 07:21:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:03.354 07:21:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:03.354 07:21:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:21:03.613 /dev/nbd0 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:03.613 07:21:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:03.613 07:21:37 -- common/autotest_common.sh@855 -- # local i 00:21:03.613 07:21:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:03.613 07:21:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:03.613 07:21:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:03.613 07:21:37 -- common/autotest_common.sh@859 -- # break 00:21:03.613 07:21:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:03.613 07:21:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:03.613 07:21:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:03.613 1+0 records in 00:21:03.613 1+0 records out 00:21:03.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265335 s, 15.4 MB/s 00:21:03.613 07:21:37 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.613 07:21:37 -- common/autotest_common.sh@872 -- # size=4096 00:21:03.613 07:21:37 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.613 07:21:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:03.613 07:21:37 -- common/autotest_common.sh@875 -- # return 0 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:03.613 07:21:37 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:21:03.613 07:21:37 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:21:03.613 07:21:37 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@12 -- # local i 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:03.613 07:21:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:21:03.872 /dev/nbd1 00:21:03.872 07:21:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:03.872 07:21:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:03.872 07:21:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:21:03.872 07:21:37 -- common/autotest_common.sh@855 -- # local i 00:21:03.872 07:21:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:03.872 07:21:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:03.872 07:21:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:21:03.872 07:21:37 -- common/autotest_common.sh@859 -- # break 00:21:03.872 07:21:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:03.872 07:21:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:03.872 07:21:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:03.872 1+0 records in 00:21:03.872 1+0 records out 00:21:03.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256687 s, 16.0 MB/s 00:21:03.872 07:21:37 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.872 07:21:37 -- common/autotest_common.sh@872 -- # size=4096 00:21:03.872 07:21:37 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:03.872 07:21:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:03.872 07:21:37 -- common/autotest_common.sh@875 -- # return 0 00:21:03.872 07:21:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:03.872 07:21:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:03.872 07:21:37 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:04.131 07:21:37 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@51 -- # local i 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:04.131 07:21:37 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:04.389 07:21:37 -- 
bdev/nbd_common.sh@41 -- # break 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@45 -- # return 0 00:21:04.389 07:21:37 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@51 -- # local i 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:04.389 07:21:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@41 -- # break 00:21:04.648 07:21:38 -- bdev/nbd_common.sh@45 -- # return 0 00:21:04.648 07:21:38 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:04.648 07:21:38 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:04.648 07:21:38 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:04.648 07:21:38 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:04.906 07:21:38 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:05.164 [2024-02-13 07:21:38.712208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:05.164 [2024-02-13 07:21:38.712299] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.164 [2024-02-13 07:21:38.712337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:05.164 [2024-02-13 07:21:38.712362] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.164 [2024-02-13 07:21:38.714353] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.164 [2024-02-13 07:21:38.714414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:05.164 [2024-02-13 07:21:38.714514] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:05.164 [2024-02-13 07:21:38.714574] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:05.164 BaseBdev1 00:21:05.164 07:21:38 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:05.164 07:21:38 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:21:05.164 07:21:38 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:21:05.423 07:21:38 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:05.680 [2024-02-13 07:21:39.140323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:05.680 [2024-02-13 07:21:39.140378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.680 [2024-02-13 07:21:39.140413] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:21:05.680 [2024-02-13 07:21:39.140434] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.680 [2024-02-13 07:21:39.140769] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.680 [2024-02-13 07:21:39.140819] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:05.680 [2024-02-13 07:21:39.140899] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:21:05.680 [2024-02-13 07:21:39.140913] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:21:05.680 [2024-02-13 07:21:39.140919] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:05.680 [2024-02-13 07:21:39.140940] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state configuring 00:21:05.680 [2024-02-13 07:21:39.141003] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:05.680 BaseBdev2 00:21:05.680 07:21:39 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:05.681 07:21:39 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:05.939 [2024-02-13 07:21:39.564489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:05.939 [2024-02-13 07:21:39.564538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:05.939 [2024-02-13 07:21:39.564569] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:21:05.939 [2024-02-13 07:21:39.564585] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:05.939 [2024-02-13 07:21:39.564962] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:05.939 [2024-02-13 07:21:39.565006] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:05.939 [2024-02-13 07:21:39.565112] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:05.939 [2024-02-13 07:21:39.565146] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:05.939 spare 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@124 -- 
# local num_base_bdevs_discovered 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:05.939 07:21:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:06.198 [2024-02-13 07:21:39.665235] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:21:06.198 [2024-02-13 07:21:39.665254] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:06.198 [2024-02-13 07:21:39.665353] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cee0 00:21:06.198 [2024-02-13 07:21:39.665662] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:21:06.198 [2024-02-13 07:21:39.665699] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:21:06.198 [2024-02-13 07:21:39.665826] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.198 07:21:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:06.198 "name": "raid_bdev1", 00:21:06.198 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:21:06.198 "strip_size_kb": 0, 00:21:06.198 "state": "online", 00:21:06.198 "raid_level": "raid1", 00:21:06.198 "superblock": true, 00:21:06.198 "num_base_bdevs": 2, 00:21:06.198 "num_base_bdevs_discovered": 2, 00:21:06.198 "num_base_bdevs_operational": 2, 00:21:06.198 "base_bdevs_list": [ 00:21:06.198 { 00:21:06.198 "name": "spare", 00:21:06.198 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:21:06.198 "is_configured": true, 00:21:06.198 "data_offset": 2048, 00:21:06.198 "data_size": 63488 00:21:06.198 }, 00:21:06.198 { 00:21:06.198 "name": "BaseBdev2", 00:21:06.198 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:21:06.198 "is_configured": true, 00:21:06.198 "data_offset": 2048, 00:21:06.198 "data_size": 63488 00:21:06.198 } 00:21:06.198 ] 00:21:06.198 }' 00:21:06.198 07:21:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:06.198 07:21:39 -- common/autotest_common.sh@10 -- # set +x 00:21:06.770 07:21:40 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:06.770 07:21:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:06.770 07:21:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:06.770 07:21:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:06.770 07:21:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:06.770 07:21:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.770 07:21:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.035 07:21:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:07.035 "name": "raid_bdev1", 00:21:07.035 "uuid": "530459ea-64ad-4e26-a2cb-51042943b4ca", 00:21:07.035 "strip_size_kb": 0, 00:21:07.035 "state": "online", 00:21:07.035 "raid_level": "raid1", 00:21:07.035 "superblock": true, 00:21:07.035 "num_base_bdevs": 2, 00:21:07.035 "num_base_bdevs_discovered": 2, 00:21:07.035 "num_base_bdevs_operational": 2, 00:21:07.035 "base_bdevs_list": [ 00:21:07.035 { 00:21:07.035 "name": "spare", 00:21:07.035 "uuid": "7aba8fa1-9cbc-5807-9cff-969980c97a27", 00:21:07.035 "is_configured": true, 00:21:07.035 "data_offset": 2048, 00:21:07.035 "data_size": 63488 00:21:07.035 }, 00:21:07.035 { 00:21:07.035 "name": 
"BaseBdev2", 00:21:07.035 "uuid": "1eab7d23-0a38-5eff-b0d6-e588d269f121", 00:21:07.035 "is_configured": true, 00:21:07.035 "data_offset": 2048, 00:21:07.035 "data_size": 63488 00:21:07.035 } 00:21:07.035 ] 00:21:07.035 }' 00:21:07.035 07:21:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:07.035 07:21:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:07.035 07:21:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:07.035 07:21:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:07.035 07:21:40 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:07.035 07:21:40 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:07.293 07:21:40 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.293 07:21:40 -- bdev/bdev_raid.sh@709 -- # killprocess 129067 00:21:07.293 07:21:40 -- common/autotest_common.sh@924 -- # '[' -z 129067 ']' 00:21:07.293 07:21:40 -- common/autotest_common.sh@928 -- # kill -0 129067 00:21:07.293 07:21:40 -- common/autotest_common.sh@929 -- # uname 00:21:07.293 07:21:40 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:07.293 07:21:40 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 129067 00:21:07.293 killing process with pid 129067 00:21:07.293 Received shutdown signal, test time was about 16.129344 seconds 00:21:07.293 00:21:07.293 Latency(us) 00:21:07.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.293 =================================================================================================================== 00:21:07.293 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.294 07:21:40 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:07.294 07:21:40 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:07.294 07:21:40 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 129067' 00:21:07.294 07:21:40 -- common/autotest_common.sh@943 -- # kill 129067 00:21:07.294 07:21:40 -- common/autotest_common.sh@948 -- # wait 129067 00:21:07.294 [2024-02-13 07:21:40.856396] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:07.294 [2024-02-13 07:21:40.856454] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:07.294 [2024-02-13 07:21:40.856503] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:07.294 [2024-02-13 07:21:40.856527] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:21:07.552 [2024-02-13 07:21:41.008971] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:08.488 ************************************ 00:21:08.488 END TEST raid_rebuild_test_sb_io 00:21:08.488 ************************************ 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:08.488 00:21:08.488 real 0m21.471s 00:21:08.488 user 0m34.307s 00:21:08.488 sys 0m2.156s 00:21:08.488 07:21:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:08.488 07:21:42 -- common/autotest_common.sh@10 -- # set +x 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:21:08.488 07:21:42 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:21:08.488 07:21:42 -- common/autotest_common.sh@1081 -- # 
xtrace_disable 00:21:08.488 07:21:42 -- common/autotest_common.sh@10 -- # set +x 00:21:08.488 ************************************ 00:21:08.488 START TEST raid_rebuild_test 00:21:08.488 ************************************ 00:21:08.488 07:21:42 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid1 4 false false 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@544 -- # raid_pid=129676 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@545 -- # waitforlisten 129676 /var/tmp/spdk-raid.sock 00:21:08.488 07:21:42 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:08.488 07:21:42 -- common/autotest_common.sh@817 -- # '[' -z 129676 ']' 00:21:08.488 07:21:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:08.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:08.488 07:21:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:08.488 07:21:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:08.488 07:21:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:08.488 07:21:42 -- common/autotest_common.sh@10 -- # set +x 00:21:08.488 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:08.488 Zero copy mechanism will not be used. 
00:21:08.488 [2024-02-13 07:21:42.137535] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:21:08.488 [2024-02-13 07:21:42.137722] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129676 ] 00:21:08.746 [2024-02-13 07:21:42.299515] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.004 [2024-02-13 07:21:42.465370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:09.004 [2024-02-13 07:21:42.635463] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:09.573 07:21:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:09.573 07:21:43 -- common/autotest_common.sh@850 -- # return 0 00:21:09.573 07:21:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:09.573 07:21:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:09.573 07:21:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:09.831 BaseBdev1 00:21:09.831 07:21:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:09.831 07:21:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:09.831 07:21:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:10.089 BaseBdev2 00:21:10.089 07:21:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:10.089 07:21:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:10.089 07:21:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:21:10.349 BaseBdev3 00:21:10.349 07:21:43 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:10.349 07:21:43 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:10.349 07:21:43 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:21:10.349 BaseBdev4 00:21:10.607 07:21:44 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:10.607 spare_malloc 00:21:10.607 07:21:44 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:10.866 spare_delay 00:21:10.866 07:21:44 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:11.124 [2024-02-13 07:21:44.679733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:11.124 [2024-02-13 07:21:44.679804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:11.124 [2024-02-13 07:21:44.679837] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:21:11.124 [2024-02-13 07:21:44.679876] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:11.124 [2024-02-13 07:21:44.682130] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:11.124 [2024-02-13 07:21:44.682174] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 
00:21:11.124 spare 00:21:11.124 07:21:44 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:11.383 [2024-02-13 07:21:44.863799] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:11.383 [2024-02-13 07:21:44.865278] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:11.383 [2024-02-13 07:21:44.865329] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:11.383 [2024-02-13 07:21:44.865372] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:11.383 [2024-02-13 07:21:44.865446] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:21:11.383 [2024-02-13 07:21:44.865457] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:21:11.383 [2024-02-13 07:21:44.865573] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:21:11.383 [2024-02-13 07:21:44.865858] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:21:11.383 [2024-02-13 07:21:44.865879] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:21:11.383 [2024-02-13 07:21:44.866018] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.383 07:21:44 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:11.641 07:21:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:11.641 "name": "raid_bdev1", 00:21:11.641 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:11.641 "strip_size_kb": 0, 00:21:11.641 "state": "online", 00:21:11.641 "raid_level": "raid1", 00:21:11.641 "superblock": false, 00:21:11.642 "num_base_bdevs": 4, 00:21:11.642 "num_base_bdevs_discovered": 4, 00:21:11.642 "num_base_bdevs_operational": 4, 00:21:11.642 "base_bdevs_list": [ 00:21:11.642 { 00:21:11.642 "name": "BaseBdev1", 00:21:11.642 "uuid": "cc25bb2a-d002-4dc0-b948-71014c96827a", 00:21:11.642 "is_configured": true, 00:21:11.642 "data_offset": 0, 00:21:11.642 "data_size": 65536 00:21:11.642 }, 00:21:11.642 { 00:21:11.642 "name": "BaseBdev2", 00:21:11.642 "uuid": "a92f2e70-87e2-4a1e-aa0d-39eeaa916d12", 00:21:11.642 "is_configured": true, 00:21:11.642 "data_offset": 0, 00:21:11.642 "data_size": 65536 00:21:11.642 }, 00:21:11.642 { 00:21:11.642 "name": "BaseBdev3", 00:21:11.642 "uuid": 
"433a32c4-2450-4110-9d31-995795579ebc", 00:21:11.642 "is_configured": true, 00:21:11.642 "data_offset": 0, 00:21:11.642 "data_size": 65536 00:21:11.642 }, 00:21:11.642 { 00:21:11.642 "name": "BaseBdev4", 00:21:11.642 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:11.642 "is_configured": true, 00:21:11.642 "data_offset": 0, 00:21:11.642 "data_size": 65536 00:21:11.642 } 00:21:11.642 ] 00:21:11.642 }' 00:21:11.642 07:21:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:11.642 07:21:45 -- common/autotest_common.sh@10 -- # set +x 00:21:12.209 07:21:45 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:12.209 07:21:45 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:12.209 [2024-02-13 07:21:45.852174] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:12.209 07:21:45 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:21:12.209 07:21:45 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:12.209 07:21:45 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:12.467 07:21:46 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:21:12.467 07:21:46 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:12.467 07:21:46 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:12.467 07:21:46 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:12.467 07:21:46 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:12.467 07:21:46 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:12.467 07:21:46 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:12.467 07:21:46 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:12.467 07:21:46 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:12.467 07:21:46 -- bdev/nbd_common.sh@12 -- # local i 00:21:12.467 07:21:46 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:12.467 07:21:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.467 07:21:46 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:12.725 [2024-02-13 07:21:46.204034] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:21:12.725 /dev/nbd0 00:21:12.725 07:21:46 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:12.725 07:21:46 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:12.726 07:21:46 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:12.726 07:21:46 -- common/autotest_common.sh@855 -- # local i 00:21:12.726 07:21:46 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:12.726 07:21:46 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:12.726 07:21:46 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:12.726 07:21:46 -- common/autotest_common.sh@859 -- # break 00:21:12.726 07:21:46 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:12.726 07:21:46 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:12.726 07:21:46 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:12.726 1+0 records in 00:21:12.726 1+0 records out 00:21:12.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383919 s, 10.7 MB/s 00:21:12.726 07:21:46 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.726 07:21:46 -- 
common/autotest_common.sh@872 -- # size=4096 00:21:12.726 07:21:46 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:12.726 07:21:46 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:12.726 07:21:46 -- common/autotest_common.sh@875 -- # return 0 00:21:12.726 07:21:46 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:12.726 07:21:46 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:12.726 07:21:46 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:12.726 07:21:46 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:12.726 07:21:46 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:21:17.993 65536+0 records in 00:21:17.993 65536+0 records out 00:21:17.993 33554432 bytes (34 MB, 32 MiB) copied, 5.34651 s, 6.3 MB/s 00:21:17.993 07:21:51 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:17.993 07:21:51 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:17.993 07:21:51 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:17.993 07:21:51 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:17.993 07:21:51 -- bdev/nbd_common.sh@51 -- # local i 00:21:17.993 07:21:51 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:17.993 07:21:51 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:18.252 07:21:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:18.252 07:21:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:18.252 07:21:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:18.252 07:21:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:18.252 07:21:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.252 07:21:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:18.252 [2024-02-13 07:21:51.867267] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:18.252 07:21:51 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:18.510 07:21:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:18.510 07:21:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:18.510 07:21:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:18.510 07:21:51 -- bdev/nbd_common.sh@41 -- # break 00:21:18.510 07:21:51 -- bdev/nbd_common.sh@45 -- # return 0 00:21:18.510 07:21:51 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:18.768 [2024-02-13 07:21:52.218977] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:18.768 "name": "raid_bdev1", 00:21:18.768 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:18.768 "strip_size_kb": 0, 00:21:18.768 "state": "online", 00:21:18.768 "raid_level": "raid1", 00:21:18.768 "superblock": false, 00:21:18.768 "num_base_bdevs": 4, 00:21:18.768 "num_base_bdevs_discovered": 3, 00:21:18.768 "num_base_bdevs_operational": 3, 00:21:18.768 "base_bdevs_list": [ 00:21:18.768 { 00:21:18.768 "name": null, 00:21:18.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:18.768 "is_configured": false, 00:21:18.768 "data_offset": 0, 00:21:18.768 "data_size": 65536 00:21:18.768 }, 00:21:18.768 { 00:21:18.768 "name": "BaseBdev2", 00:21:18.768 "uuid": "a92f2e70-87e2-4a1e-aa0d-39eeaa916d12", 00:21:18.768 "is_configured": true, 00:21:18.768 "data_offset": 0, 00:21:18.768 "data_size": 65536 00:21:18.768 }, 00:21:18.768 { 00:21:18.768 "name": "BaseBdev3", 00:21:18.768 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:18.768 "is_configured": true, 00:21:18.768 "data_offset": 0, 00:21:18.768 "data_size": 65536 00:21:18.768 }, 00:21:18.768 { 00:21:18.768 "name": "BaseBdev4", 00:21:18.768 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:18.768 "is_configured": true, 00:21:18.768 "data_offset": 0, 00:21:18.768 "data_size": 65536 00:21:18.768 } 00:21:18.768 ] 00:21:18.768 }' 00:21:18.768 07:21:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:18.768 07:21:52 -- common/autotest_common.sh@10 -- # set +x 00:21:19.702 07:21:53 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:19.702 [2024-02-13 07:21:53.288578] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:19.702 [2024-02-13 07:21:53.288620] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:19.702 [2024-02-13 07:21:53.298549] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b6a0 00:21:19.702 [2024-02-13 07:21:53.300049] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:19.702 07:21:53 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:20.637 07:21:54 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:20.637 07:21:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:20.637 07:21:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:20.638 07:21:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:20.638 07:21:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:20.638 07:21:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:20.638 07:21:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:20.896 07:21:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:20.896 "name": "raid_bdev1", 00:21:20.896 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:20.896 "strip_size_kb": 0, 00:21:20.896 "state": "online", 00:21:20.896 "raid_level": "raid1", 00:21:20.896 "superblock": false, 00:21:20.896 "num_base_bdevs": 4, 00:21:20.896 "num_base_bdevs_discovered": 4, 00:21:20.896 "num_base_bdevs_operational": 4, 00:21:20.896 "process": { 00:21:20.896 "type": "rebuild", 00:21:20.896 "target": 
"spare", 00:21:20.896 "progress": { 00:21:20.896 "blocks": 24576, 00:21:20.896 "percent": 37 00:21:20.896 } 00:21:20.896 }, 00:21:20.896 "base_bdevs_list": [ 00:21:20.896 { 00:21:20.896 "name": "spare", 00:21:20.896 "uuid": "6996a78c-5c0d-56e8-89e0-151be628034a", 00:21:20.896 "is_configured": true, 00:21:20.896 "data_offset": 0, 00:21:20.896 "data_size": 65536 00:21:20.896 }, 00:21:20.896 { 00:21:20.896 "name": "BaseBdev2", 00:21:20.896 "uuid": "a92f2e70-87e2-4a1e-aa0d-39eeaa916d12", 00:21:20.896 "is_configured": true, 00:21:20.896 "data_offset": 0, 00:21:20.896 "data_size": 65536 00:21:20.896 }, 00:21:20.896 { 00:21:20.896 "name": "BaseBdev3", 00:21:20.896 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:20.896 "is_configured": true, 00:21:20.896 "data_offset": 0, 00:21:20.896 "data_size": 65536 00:21:20.896 }, 00:21:20.896 { 00:21:20.896 "name": "BaseBdev4", 00:21:20.896 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:20.896 "is_configured": true, 00:21:20.896 "data_offset": 0, 00:21:20.896 "data_size": 65536 00:21:20.896 } 00:21:20.896 ] 00:21:20.896 }' 00:21:20.896 07:21:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:21.154 07:21:54 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:21.154 07:21:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:21.154 07:21:54 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:21.154 07:21:54 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:21.412 [2024-02-13 07:21:54.898938] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:21.412 [2024-02-13 07:21:54.908983] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:21.412 [2024-02-13 07:21:54.909115] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:21.412 07:21:54 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:21.413 07:21:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:21.671 07:21:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:21.671 "name": "raid_bdev1", 00:21:21.671 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:21.671 "strip_size_kb": 0, 00:21:21.671 "state": "online", 00:21:21.671 "raid_level": "raid1", 00:21:21.671 "superblock": false, 00:21:21.671 "num_base_bdevs": 4, 00:21:21.671 "num_base_bdevs_discovered": 3, 00:21:21.671 "num_base_bdevs_operational": 3, 00:21:21.671 "base_bdevs_list": [ 00:21:21.671 { 00:21:21.671 "name": null, 00:21:21.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:21.671 
"is_configured": false, 00:21:21.671 "data_offset": 0, 00:21:21.671 "data_size": 65536 00:21:21.671 }, 00:21:21.671 { 00:21:21.671 "name": "BaseBdev2", 00:21:21.671 "uuid": "a92f2e70-87e2-4a1e-aa0d-39eeaa916d12", 00:21:21.671 "is_configured": true, 00:21:21.671 "data_offset": 0, 00:21:21.671 "data_size": 65536 00:21:21.671 }, 00:21:21.671 { 00:21:21.671 "name": "BaseBdev3", 00:21:21.671 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:21.671 "is_configured": true, 00:21:21.671 "data_offset": 0, 00:21:21.671 "data_size": 65536 00:21:21.671 }, 00:21:21.671 { 00:21:21.671 "name": "BaseBdev4", 00:21:21.671 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:21.671 "is_configured": true, 00:21:21.671 "data_offset": 0, 00:21:21.671 "data_size": 65536 00:21:21.671 } 00:21:21.671 ] 00:21:21.671 }' 00:21:21.671 07:21:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:21.671 07:21:55 -- common/autotest_common.sh@10 -- # set +x 00:21:22.238 07:21:55 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:22.238 07:21:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:22.238 07:21:55 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:22.238 07:21:55 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:22.238 07:21:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:22.238 07:21:55 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:22.238 07:21:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:22.497 07:21:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:22.497 "name": "raid_bdev1", 00:21:22.497 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:22.497 "strip_size_kb": 0, 00:21:22.497 "state": "online", 00:21:22.497 "raid_level": "raid1", 00:21:22.497 "superblock": false, 00:21:22.497 "num_base_bdevs": 4, 00:21:22.497 "num_base_bdevs_discovered": 3, 00:21:22.497 "num_base_bdevs_operational": 3, 00:21:22.497 "base_bdevs_list": [ 00:21:22.497 { 00:21:22.497 "name": null, 00:21:22.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:22.497 "is_configured": false, 00:21:22.497 "data_offset": 0, 00:21:22.497 "data_size": 65536 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "name": "BaseBdev2", 00:21:22.497 "uuid": "a92f2e70-87e2-4a1e-aa0d-39eeaa916d12", 00:21:22.497 "is_configured": true, 00:21:22.497 "data_offset": 0, 00:21:22.497 "data_size": 65536 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "name": "BaseBdev3", 00:21:22.497 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:22.497 "is_configured": true, 00:21:22.497 "data_offset": 0, 00:21:22.497 "data_size": 65536 00:21:22.497 }, 00:21:22.497 { 00:21:22.497 "name": "BaseBdev4", 00:21:22.497 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:22.497 "is_configured": true, 00:21:22.497 "data_offset": 0, 00:21:22.497 "data_size": 65536 00:21:22.497 } 00:21:22.497 ] 00:21:22.497 }' 00:21:22.497 07:21:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:22.497 07:21:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:22.497 07:21:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:22.497 07:21:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:22.497 07:21:56 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:22.756 [2024-02-13 07:21:56.391553] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:21:22.756 [2024-02-13 07:21:56.391626] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:22.756 [2024-02-13 07:21:56.401887] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0b840 00:21:22.756 [2024-02-13 07:21:56.403784] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:22.756 07:21:56 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.134 "name": "raid_bdev1", 00:21:24.134 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:24.134 "strip_size_kb": 0, 00:21:24.134 "state": "online", 00:21:24.134 "raid_level": "raid1", 00:21:24.134 "superblock": false, 00:21:24.134 "num_base_bdevs": 4, 00:21:24.134 "num_base_bdevs_discovered": 4, 00:21:24.134 "num_base_bdevs_operational": 4, 00:21:24.134 "process": { 00:21:24.134 "type": "rebuild", 00:21:24.134 "target": "spare", 00:21:24.134 "progress": { 00:21:24.134 "blocks": 24576, 00:21:24.134 "percent": 37 00:21:24.134 } 00:21:24.134 }, 00:21:24.134 "base_bdevs_list": [ 00:21:24.134 { 00:21:24.134 "name": "spare", 00:21:24.134 "uuid": "6996a78c-5c0d-56e8-89e0-151be628034a", 00:21:24.134 "is_configured": true, 00:21:24.134 "data_offset": 0, 00:21:24.134 "data_size": 65536 00:21:24.134 }, 00:21:24.134 { 00:21:24.134 "name": "BaseBdev2", 00:21:24.134 "uuid": "a92f2e70-87e2-4a1e-aa0d-39eeaa916d12", 00:21:24.134 "is_configured": true, 00:21:24.134 "data_offset": 0, 00:21:24.134 "data_size": 65536 00:21:24.134 }, 00:21:24.134 { 00:21:24.134 "name": "BaseBdev3", 00:21:24.134 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:24.134 "is_configured": true, 00:21:24.134 "data_offset": 0, 00:21:24.134 "data_size": 65536 00:21:24.134 }, 00:21:24.134 { 00:21:24.134 "name": "BaseBdev4", 00:21:24.134 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:24.134 "is_configured": true, 00:21:24.134 "data_offset": 0, 00:21:24.134 "data_size": 65536 00:21:24.134 } 00:21:24.134 ] 00:21:24.134 }' 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:24.134 07:21:57 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:24.393 [2024-02-13 07:21:57.946242] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:24.393 [2024-02-13 07:21:58.014029] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0b840 00:21:24.393 07:21:58 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:24.393 07:21:58 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:24.393 07:21:58 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.393 07:21:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.393 07:21:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.393 07:21:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.393 07:21:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.393 07:21:58 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.393 07:21:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.652 07:21:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.652 "name": "raid_bdev1", 00:21:24.652 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:24.652 "strip_size_kb": 0, 00:21:24.652 "state": "online", 00:21:24.652 "raid_level": "raid1", 00:21:24.652 "superblock": false, 00:21:24.652 "num_base_bdevs": 4, 00:21:24.652 "num_base_bdevs_discovered": 3, 00:21:24.652 "num_base_bdevs_operational": 3, 00:21:24.652 "process": { 00:21:24.652 "type": "rebuild", 00:21:24.652 "target": "spare", 00:21:24.652 "progress": { 00:21:24.652 "blocks": 36864, 00:21:24.652 "percent": 56 00:21:24.652 } 00:21:24.652 }, 00:21:24.652 "base_bdevs_list": [ 00:21:24.652 { 00:21:24.652 "name": "spare", 00:21:24.652 "uuid": "6996a78c-5c0d-56e8-89e0-151be628034a", 00:21:24.652 "is_configured": true, 00:21:24.652 "data_offset": 0, 00:21:24.652 "data_size": 65536 00:21:24.652 }, 00:21:24.652 { 00:21:24.652 "name": null, 00:21:24.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.652 "is_configured": false, 00:21:24.652 "data_offset": 0, 00:21:24.652 "data_size": 65536 00:21:24.652 }, 00:21:24.652 { 00:21:24.652 "name": "BaseBdev3", 00:21:24.652 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:24.652 "is_configured": true, 00:21:24.652 "data_offset": 0, 00:21:24.652 "data_size": 65536 00:21:24.652 }, 00:21:24.652 { 00:21:24.652 "name": "BaseBdev4", 00:21:24.652 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:24.652 "is_configured": true, 00:21:24.652 "data_offset": 0, 00:21:24.652 "data_size": 65536 00:21:24.652 } 00:21:24.652 ] 00:21:24.652 }' 00:21:24.652 07:21:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:24.652 07:21:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:24.652 07:21:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:24.910 07:21:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:24.910 07:21:58 -- bdev/bdev_raid.sh@657 -- # local timeout=499 00:21:24.910 07:21:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:24.910 07:21:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:24.910 07:21:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:24.910 07:21:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:24.910 07:21:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:24.910 07:21:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:24.910 07:21:58 -- bdev/bdev_raid.sh@188 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:24.911 07:21:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:24.911 07:21:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:24.911 "name": "raid_bdev1", 00:21:24.911 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:24.911 "strip_size_kb": 0, 00:21:24.911 "state": "online", 00:21:24.911 "raid_level": "raid1", 00:21:24.911 "superblock": false, 00:21:24.911 "num_base_bdevs": 4, 00:21:24.911 "num_base_bdevs_discovered": 3, 00:21:24.911 "num_base_bdevs_operational": 3, 00:21:24.911 "process": { 00:21:24.911 "type": "rebuild", 00:21:24.911 "target": "spare", 00:21:24.911 "progress": { 00:21:24.911 "blocks": 43008, 00:21:24.911 "percent": 65 00:21:24.911 } 00:21:24.911 }, 00:21:24.911 "base_bdevs_list": [ 00:21:24.911 { 00:21:24.911 "name": "spare", 00:21:24.911 "uuid": "6996a78c-5c0d-56e8-89e0-151be628034a", 00:21:24.911 "is_configured": true, 00:21:24.911 "data_offset": 0, 00:21:24.911 "data_size": 65536 00:21:24.911 }, 00:21:24.911 { 00:21:24.911 "name": null, 00:21:24.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:24.911 "is_configured": false, 00:21:24.911 "data_offset": 0, 00:21:24.911 "data_size": 65536 00:21:24.911 }, 00:21:24.911 { 00:21:24.911 "name": "BaseBdev3", 00:21:24.911 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:24.911 "is_configured": true, 00:21:24.911 "data_offset": 0, 00:21:24.911 "data_size": 65536 00:21:24.911 }, 00:21:24.911 { 00:21:24.911 "name": "BaseBdev4", 00:21:24.911 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:24.911 "is_configured": true, 00:21:24.911 "data_offset": 0, 00:21:24.911 "data_size": 65536 00:21:24.911 } 00:21:24.911 ] 00:21:24.911 }' 00:21:24.911 07:21:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:25.170 07:21:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:25.170 07:21:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:25.170 07:21:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:25.170 07:21:58 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:26.107 [2024-02-13 07:21:59.624819] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:26.107 [2024-02-13 07:21:59.624903] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:26.107 [2024-02-13 07:21:59.625001] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:26.107 07:21:59 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:26.107 07:21:59 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:26.107 07:21:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.107 07:21:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:26.107 07:21:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:26.107 07:21:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.107 07:21:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.107 07:21:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.365 07:21:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.365 "name": "raid_bdev1", 00:21:26.365 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:26.365 "strip_size_kb": 0, 00:21:26.365 "state": "online", 00:21:26.365 "raid_level": "raid1", 
00:21:26.365 "superblock": false, 00:21:26.365 "num_base_bdevs": 4, 00:21:26.365 "num_base_bdevs_discovered": 3, 00:21:26.365 "num_base_bdevs_operational": 3, 00:21:26.365 "base_bdevs_list": [ 00:21:26.365 { 00:21:26.366 "name": "spare", 00:21:26.366 "uuid": "6996a78c-5c0d-56e8-89e0-151be628034a", 00:21:26.366 "is_configured": true, 00:21:26.366 "data_offset": 0, 00:21:26.366 "data_size": 65536 00:21:26.366 }, 00:21:26.366 { 00:21:26.366 "name": null, 00:21:26.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.366 "is_configured": false, 00:21:26.366 "data_offset": 0, 00:21:26.366 "data_size": 65536 00:21:26.366 }, 00:21:26.366 { 00:21:26.366 "name": "BaseBdev3", 00:21:26.366 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:26.366 "is_configured": true, 00:21:26.366 "data_offset": 0, 00:21:26.366 "data_size": 65536 00:21:26.366 }, 00:21:26.366 { 00:21:26.366 "name": "BaseBdev4", 00:21:26.366 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:26.366 "is_configured": true, 00:21:26.366 "data_offset": 0, 00:21:26.366 "data_size": 65536 00:21:26.366 } 00:21:26.366 ] 00:21:26.366 }' 00:21:26.366 07:21:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.366 07:22:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:26.366 07:22:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.625 07:22:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:26.625 07:22:00 -- bdev/bdev_raid.sh@660 -- # break 00:21:26.625 07:22:00 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:26.625 07:22:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:26.625 07:22:00 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:26.625 07:22:00 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:26.625 07:22:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:26.625 07:22:00 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.625 07:22:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:26.889 "name": "raid_bdev1", 00:21:26.889 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:26.889 "strip_size_kb": 0, 00:21:26.889 "state": "online", 00:21:26.889 "raid_level": "raid1", 00:21:26.889 "superblock": false, 00:21:26.889 "num_base_bdevs": 4, 00:21:26.889 "num_base_bdevs_discovered": 3, 00:21:26.889 "num_base_bdevs_operational": 3, 00:21:26.889 "base_bdevs_list": [ 00:21:26.889 { 00:21:26.889 "name": "spare", 00:21:26.889 "uuid": "6996a78c-5c0d-56e8-89e0-151be628034a", 00:21:26.889 "is_configured": true, 00:21:26.889 "data_offset": 0, 00:21:26.889 "data_size": 65536 00:21:26.889 }, 00:21:26.889 { 00:21:26.889 "name": null, 00:21:26.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:26.889 "is_configured": false, 00:21:26.889 "data_offset": 0, 00:21:26.889 "data_size": 65536 00:21:26.889 }, 00:21:26.889 { 00:21:26.889 "name": "BaseBdev3", 00:21:26.889 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:26.889 "is_configured": true, 00:21:26.889 "data_offset": 0, 00:21:26.889 "data_size": 65536 00:21:26.889 }, 00:21:26.889 { 00:21:26.889 "name": "BaseBdev4", 00:21:26.889 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:26.889 "is_configured": true, 00:21:26.889 "data_offset": 0, 00:21:26.889 "data_size": 65536 00:21:26.889 } 00:21:26.889 ] 00:21:26.889 }' 00:21:26.889 07:22:00 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:26.889 07:22:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:27.155 07:22:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:27.156 "name": "raid_bdev1", 00:21:27.156 "uuid": "35464105-9b93-4b9b-bb10-6c152c75c0aa", 00:21:27.156 "strip_size_kb": 0, 00:21:27.156 "state": "online", 00:21:27.156 "raid_level": "raid1", 00:21:27.156 "superblock": false, 00:21:27.156 "num_base_bdevs": 4, 00:21:27.156 "num_base_bdevs_discovered": 3, 00:21:27.156 "num_base_bdevs_operational": 3, 00:21:27.156 "base_bdevs_list": [ 00:21:27.156 { 00:21:27.156 "name": "spare", 00:21:27.156 "uuid": "6996a78c-5c0d-56e8-89e0-151be628034a", 00:21:27.156 "is_configured": true, 00:21:27.156 "data_offset": 0, 00:21:27.156 "data_size": 65536 00:21:27.156 }, 00:21:27.156 { 00:21:27.156 "name": null, 00:21:27.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:27.156 "is_configured": false, 00:21:27.156 "data_offset": 0, 00:21:27.156 "data_size": 65536 00:21:27.156 }, 00:21:27.156 { 00:21:27.156 "name": "BaseBdev3", 00:21:27.156 "uuid": "433a32c4-2450-4110-9d31-995795579ebc", 00:21:27.156 "is_configured": true, 00:21:27.156 "data_offset": 0, 00:21:27.156 "data_size": 65536 00:21:27.156 }, 00:21:27.156 { 00:21:27.156 "name": "BaseBdev4", 00:21:27.156 "uuid": "0672d3e8-8e28-4529-ae03-8b705561fbcf", 00:21:27.156 "is_configured": true, 00:21:27.156 "data_offset": 0, 00:21:27.156 "data_size": 65536 00:21:27.156 } 00:21:27.156 ] 00:21:27.156 }' 00:21:27.156 07:22:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:27.156 07:22:00 -- common/autotest_common.sh@10 -- # set +x 00:21:27.723 07:22:01 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:27.982 [2024-02-13 07:22:01.604914] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:27.982 [2024-02-13 07:22:01.604943] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:27.982 [2024-02-13 07:22:01.605016] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:27.982 [2024-02-13 07:22:01.605128] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:27.982 [2024-02-13 
07:22:01.605141] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:21:27.982 07:22:01 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:27.982 07:22:01 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:28.241 07:22:01 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:28.241 07:22:01 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:28.241 07:22:01 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:28.241 07:22:01 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:28.241 07:22:01 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:28.241 07:22:01 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:28.241 07:22:01 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:28.241 07:22:01 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:28.241 07:22:01 -- bdev/nbd_common.sh@12 -- # local i 00:21:28.241 07:22:01 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:28.241 07:22:01 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:28.241 07:22:01 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:28.499 /dev/nbd0 00:21:28.499 07:22:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:28.499 07:22:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:28.499 07:22:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:28.499 07:22:02 -- common/autotest_common.sh@855 -- # local i 00:21:28.499 07:22:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:28.499 07:22:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:28.499 07:22:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:28.499 07:22:02 -- common/autotest_common.sh@859 -- # break 00:21:28.499 07:22:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:28.499 07:22:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:28.499 07:22:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:28.499 1+0 records in 00:21:28.499 1+0 records out 00:21:28.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429428 s, 9.5 MB/s 00:21:28.499 07:22:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.499 07:22:02 -- common/autotest_common.sh@872 -- # size=4096 00:21:28.499 07:22:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.499 07:22:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:28.499 07:22:02 -- common/autotest_common.sh@875 -- # return 0 00:21:28.499 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.499 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:28.499 07:22:02 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:28.757 /dev/nbd1 00:21:28.757 07:22:02 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:28.757 07:22:02 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:28.757 07:22:02 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:21:28.757 07:22:02 -- common/autotest_common.sh@855 -- # local i 00:21:28.757 07:22:02 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:28.757 07:22:02 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 
00:21:28.757 07:22:02 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:21:28.757 07:22:02 -- common/autotest_common.sh@859 -- # break 00:21:28.757 07:22:02 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:28.757 07:22:02 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:28.757 07:22:02 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:28.757 1+0 records in 00:21:28.757 1+0 records out 00:21:28.757 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000609547 s, 6.7 MB/s 00:21:28.757 07:22:02 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.757 07:22:02 -- common/autotest_common.sh@872 -- # size=4096 00:21:28.757 07:22:02 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:28.757 07:22:02 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:28.757 07:22:02 -- common/autotest_common.sh@875 -- # return 0 00:21:28.757 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:28.757 07:22:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:28.757 07:22:02 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:21:29.015 07:22:02 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:29.015 07:22:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:29.015 07:22:02 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:29.015 07:22:02 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:29.015 07:22:02 -- bdev/nbd_common.sh@51 -- # local i 00:21:29.015 07:22:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.015 07:22:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@41 -- # break 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:29.273 07:22:02 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:29.530 07:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:29.530 07:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:29.530 07:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:29.530 07:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:29.530 07:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:29.530 07:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.530 07:22:03 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:29.788 07:22:03 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:29.788 07:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:21:29.788 07:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:29.788 07:22:03 -- bdev/nbd_common.sh@41 -- # break 00:21:29.788 07:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:21:29.788 07:22:03 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:21:29.788 07:22:03 -- bdev/bdev_raid.sh@709 -- # killprocess 129676 00:21:29.788 07:22:03 -- common/autotest_common.sh@924 -- # '[' -z 129676 ']' 00:21:29.788 07:22:03 -- common/autotest_common.sh@928 -- # kill -0 129676 00:21:29.788 07:22:03 -- common/autotest_common.sh@929 -- # uname 00:21:29.788 07:22:03 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:29.788 07:22:03 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 129676 00:21:29.788 killing process with pid 129676 00:21:29.788 Received shutdown signal, test time was about 60.000000 seconds 00:21:29.788 00:21:29.788 Latency(us) 00:21:29.788 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.788 =================================================================================================================== 00:21:29.789 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:29.789 07:22:03 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:21:29.789 07:22:03 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:29.789 07:22:03 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 129676' 00:21:29.789 07:22:03 -- common/autotest_common.sh@943 -- # kill 129676 00:21:29.789 07:22:03 -- common/autotest_common.sh@948 -- # wait 129676 00:21:29.789 [2024-02-13 07:22:03.288869] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:30.047 [2024-02-13 07:22:03.613224] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:30.982 ************************************ 00:21:30.982 END TEST raid_rebuild_test 00:21:30.982 ************************************ 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:30.982 00:21:30.982 real 0m22.516s 00:21:30.982 user 0m31.660s 00:21:30.982 sys 0m3.600s 00:21:30.982 07:22:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:30.982 07:22:04 -- common/autotest_common.sh@10 -- # set +x 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:21:30.982 07:22:04 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:21:30.982 07:22:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:30.982 07:22:04 -- common/autotest_common.sh@10 -- # set +x 00:21:30.982 ************************************ 00:21:30.982 START TEST raid_rebuild_test_sb 00:21:30.982 ************************************ 00:21:30.982 07:22:04 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid1 4 true false 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:30.982 
07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=130273 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 130273 /var/tmp/spdk-raid.sock 00:21:30.982 07:22:04 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:30.982 07:22:04 -- common/autotest_common.sh@817 -- # '[' -z 130273 ']' 00:21:30.982 07:22:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:30.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:21:30.982 07:22:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:30.982 07:22:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:30.982 07:22:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:30.982 07:22:04 -- common/autotest_common.sh@10 -- # set +x 00:21:31.240 [2024-02-13 07:22:04.714708] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:21:31.240 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:31.240 Zero copy mechanism will not be used. 
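(For reference: the verify_raid_bdev_process checks that recur throughout this run reduce to the minimal shell sketch below. It assumes an SPDK target already serving RPC on /var/tmp/spdk-raid.sock and a raid bdev named raid_bdev1, exactly as in the log; it is an illustrative aside, not part of the recorded output.)

  # Minimal sketch of the verify_raid_bdev_process pattern, assuming an
  # SPDK app is serving RPC on /var/tmp/spdk-raid.sock (as in this run).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  # Pull the raid bdev's JSON record out of the full bdev list.
  info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
         jq -r '.[] | select(.name == "raid_bdev1")')
  # A live rebuild reports process.type == "rebuild" and the bdev being
  # rebuilt onto (here "spare") as process.target; both default to "none"
  # via jq's // operator when no background process is running.
  [[ $(jq -r '.process.type // "none"' <<< "$info") == "rebuild" ]]
  [[ $(jq -r '.process.target // "none"' <<< "$info") == "spare" ]]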
00:21:31.240 [2024-02-13 07:22:04.714926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130273 ] 00:21:31.240 [2024-02-13 07:22:04.874729] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.498 [2024-02-13 07:22:05.043886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.756 [2024-02-13 07:22:05.218173] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:32.014 07:22:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:32.014 07:22:05 -- common/autotest_common.sh@850 -- # return 0 00:21:32.014 07:22:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.014 07:22:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:32.014 07:22:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:21:32.273 BaseBdev1_malloc 00:21:32.273 07:22:05 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:32.531 [2024-02-13 07:22:05.974106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:32.531 [2024-02-13 07:22:05.974228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.531 [2024-02-13 07:22:05.974261] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:21:32.531 [2024-02-13 07:22:05.974308] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.531 [2024-02-13 07:22:05.976395] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.531 [2024-02-13 07:22:05.976446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:32.531 BaseBdev1 00:21:32.531 07:22:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.531 07:22:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:32.531 07:22:05 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:21:32.531 BaseBdev2_malloc 00:21:32.531 07:22:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:21:32.790 [2024-02-13 07:22:06.387825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:21:32.790 [2024-02-13 07:22:06.387934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:32.790 [2024-02-13 07:22:06.387978] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:21:32.790 [2024-02-13 07:22:06.388030] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:32.790 [2024-02-13 07:22:06.390115] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:32.790 [2024-02-13 07:22:06.390179] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:21:32.790 BaseBdev2 00:21:32.790 07:22:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:32.790 07:22:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:32.790 07:22:06 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:21:33.049 BaseBdev3_malloc 00:21:33.049 07:22:06 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:33.307 [2024-02-13 07:22:06.799874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:33.307 [2024-02-13 07:22:06.799965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.307 [2024-02-13 07:22:06.800020] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:21:33.307 [2024-02-13 07:22:06.800061] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.307 [2024-02-13 07:22:06.802065] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.307 [2024-02-13 07:22:06.802131] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:33.307 BaseBdev3 00:21:33.307 07:22:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:33.307 07:22:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:21:33.307 07:22:06 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:21:33.566 BaseBdev4_malloc 00:21:33.566 07:22:07 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:33.566 [2024-02-13 07:22:07.205931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:33.566 [2024-02-13 07:22:07.206048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:33.566 [2024-02-13 07:22:07.206084] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:21:33.566 [2024-02-13 07:22:07.206129] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:33.566 [2024-02-13 07:22:07.208116] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:33.566 [2024-02-13 07:22:07.208201] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:33.566 BaseBdev4 00:21:33.566 07:22:07 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:21:33.824 spare_malloc 00:21:33.824 07:22:07 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:21:34.083 spare_delay 00:21:34.083 07:22:07 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:34.342 [2024-02-13 07:22:07.846294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:34.342 [2024-02-13 07:22:07.846381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:34.342 [2024-02-13 07:22:07.846411] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:21:34.342 [2024-02-13 07:22:07.846449] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:34.342 [2024-02-13 07:22:07.848543] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:21:34.342 [2024-02-13 07:22:07.848602] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:34.342 spare 00:21:34.342 07:22:07 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:21:34.342 [2024-02-13 07:22:08.026431] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:34.342 [2024-02-13 07:22:08.027909] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:34.342 [2024-02-13 07:22:08.027980] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:34.342 [2024-02-13 07:22:08.028034] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:34.342 [2024-02-13 07:22:08.028262] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:21:34.342 [2024-02-13 07:22:08.028277] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:34.342 [2024-02-13 07:22:08.028384] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:21:34.342 [2024-02-13 07:22:08.028710] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:21:34.342 [2024-02-13 07:22:08.028724] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:21:34.342 [2024-02-13 07:22:08.028835] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:34.600 07:22:08 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:21:34.600 07:22:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:34.600 07:22:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:34.600 07:22:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:34.600 07:22:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:34.600 07:22:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:21:34.600 07:22:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:34.601 07:22:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:34.601 07:22:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:34.601 07:22:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:34.601 07:22:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:34.601 07:22:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:34.601 07:22:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:34.601 "name": "raid_bdev1", 00:21:34.601 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:34.601 "strip_size_kb": 0, 00:21:34.601 "state": "online", 00:21:34.601 "raid_level": "raid1", 00:21:34.601 "superblock": true, 00:21:34.601 "num_base_bdevs": 4, 00:21:34.601 "num_base_bdevs_discovered": 4, 00:21:34.601 "num_base_bdevs_operational": 4, 00:21:34.601 "base_bdevs_list": [ 00:21:34.601 { 00:21:34.601 "name": "BaseBdev1", 00:21:34.601 "uuid": "4f490d15-8e7d-50d0-b93d-71a03e16d983", 00:21:34.601 "is_configured": true, 00:21:34.601 "data_offset": 2048, 00:21:34.601 "data_size": 63488 00:21:34.601 }, 00:21:34.601 { 00:21:34.601 "name": "BaseBdev2", 00:21:34.601 "uuid": "7be076f9-b3e9-5652-bc1e-01da5e742d5b", 00:21:34.601 "is_configured": true, 00:21:34.601 "data_offset": 2048, 
00:21:34.601 "data_size": 63488 00:21:34.601 }, 00:21:34.601 { 00:21:34.601 "name": "BaseBdev3", 00:21:34.601 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:34.601 "is_configured": true, 00:21:34.601 "data_offset": 2048, 00:21:34.601 "data_size": 63488 00:21:34.601 }, 00:21:34.601 { 00:21:34.601 "name": "BaseBdev4", 00:21:34.601 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:34.601 "is_configured": true, 00:21:34.601 "data_offset": 2048, 00:21:34.601 "data_size": 63488 00:21:34.601 } 00:21:34.601 ] 00:21:34.601 }' 00:21:34.601 07:22:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:34.601 07:22:08 -- common/autotest_common.sh@10 -- # set +x 00:21:35.534 07:22:08 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:21:35.534 07:22:08 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:21:35.534 [2024-02-13 07:22:09.138735] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:21:35.534 07:22:09 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:21:35.535 07:22:09 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:35.535 07:22:09 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:21:35.792 07:22:09 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:21:35.792 07:22:09 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:21:35.792 07:22:09 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:21:35.792 07:22:09 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:21:35.792 07:22:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:35.792 07:22:09 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:35.792 07:22:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:35.792 07:22:09 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:35.792 07:22:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:35.792 07:22:09 -- bdev/nbd_common.sh@12 -- # local i 00:21:35.792 07:22:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:35.792 07:22:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:35.792 07:22:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:21:36.050 [2024-02-13 07:22:09.566722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:21:36.050 /dev/nbd0 00:21:36.050 07:22:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.050 07:22:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.050 07:22:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:36.050 07:22:09 -- common/autotest_common.sh@855 -- # local i 00:21:36.050 07:22:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:36.050 07:22:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:36.050 07:22:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:36.050 07:22:09 -- common/autotest_common.sh@859 -- # break 00:21:36.050 07:22:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:36.050 07:22:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:36.050 07:22:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.050 1+0 records in 00:21:36.050 1+0 records out 00:21:36.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023596 s, 17.4 MB/s 00:21:36.050 
07:22:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.050 07:22:09 -- common/autotest_common.sh@872 -- # size=4096 00:21:36.050 07:22:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.050 07:22:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:36.050 07:22:09 -- common/autotest_common.sh@875 -- # return 0 00:21:36.050 07:22:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.050 07:22:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.050 07:22:09 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:21:36.050 07:22:09 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:21:36.050 07:22:09 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:21:42.627 63488+0 records in 00:21:42.627 63488+0 records out 00:21:42.627 32505856 bytes (33 MB, 31 MiB) copied, 6.00511 s, 5.4 MB/s 00:21:42.627 07:22:15 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@51 -- # local i 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:42.627 [2024-02-13 07:22:15.848884] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@41 -- # break 00:21:42.627 07:22:15 -- bdev/nbd_common.sh@45 -- # return 0 00:21:42.627 07:22:15 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:21:42.627 [2024-02-13 07:22:16.128509] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@125 
-- # local tmp 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:42.627 07:22:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:42.887 07:22:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:42.887 "name": "raid_bdev1", 00:21:42.887 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:42.887 "strip_size_kb": 0, 00:21:42.887 "state": "online", 00:21:42.887 "raid_level": "raid1", 00:21:42.887 "superblock": true, 00:21:42.887 "num_base_bdevs": 4, 00:21:42.887 "num_base_bdevs_discovered": 3, 00:21:42.887 "num_base_bdevs_operational": 3, 00:21:42.887 "base_bdevs_list": [ 00:21:42.887 { 00:21:42.887 "name": null, 00:21:42.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:42.887 "is_configured": false, 00:21:42.887 "data_offset": 2048, 00:21:42.887 "data_size": 63488 00:21:42.887 }, 00:21:42.887 { 00:21:42.887 "name": "BaseBdev2", 00:21:42.887 "uuid": "7be076f9-b3e9-5652-bc1e-01da5e742d5b", 00:21:42.887 "is_configured": true, 00:21:42.887 "data_offset": 2048, 00:21:42.887 "data_size": 63488 00:21:42.887 }, 00:21:42.887 { 00:21:42.887 "name": "BaseBdev3", 00:21:42.887 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:42.887 "is_configured": true, 00:21:42.887 "data_offset": 2048, 00:21:42.887 "data_size": 63488 00:21:42.887 }, 00:21:42.887 { 00:21:42.887 "name": "BaseBdev4", 00:21:42.887 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:42.887 "is_configured": true, 00:21:42.887 "data_offset": 2048, 00:21:42.887 "data_size": 63488 00:21:42.887 } 00:21:42.887 ] 00:21:42.887 }' 00:21:42.887 07:22:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:42.887 07:22:16 -- common/autotest_common.sh@10 -- # set +x 00:21:43.455 07:22:17 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:43.714 [2024-02-13 07:22:17.256740] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:43.714 [2024-02-13 07:22:17.256776] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:43.714 [2024-02-13 07:22:17.266836] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5170 00:21:43.714 [2024-02-13 07:22:17.268488] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:43.714 07:22:17 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:21:44.650 07:22:18 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:44.651 07:22:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:44.651 07:22:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:44.651 07:22:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:44.651 07:22:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:44.651 07:22:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:44.651 07:22:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:44.909 07:22:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:44.909 "name": "raid_bdev1", 00:21:44.909 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:44.909 "strip_size_kb": 0, 00:21:44.909 "state": "online", 00:21:44.909 "raid_level": "raid1", 00:21:44.909 "superblock": true, 00:21:44.909 "num_base_bdevs": 4, 00:21:44.909 "num_base_bdevs_discovered": 4, 
00:21:44.909 "num_base_bdevs_operational": 4, 00:21:44.909 "process": { 00:21:44.909 "type": "rebuild", 00:21:44.909 "target": "spare", 00:21:44.909 "progress": { 00:21:44.909 "blocks": 24576, 00:21:44.909 "percent": 38 00:21:44.909 } 00:21:44.909 }, 00:21:44.909 "base_bdevs_list": [ 00:21:44.909 { 00:21:44.909 "name": "spare", 00:21:44.909 "uuid": "2ecceec9-9f80-56e6-b2f6-685a272f6ed4", 00:21:44.909 "is_configured": true, 00:21:44.909 "data_offset": 2048, 00:21:44.909 "data_size": 63488 00:21:44.909 }, 00:21:44.909 { 00:21:44.909 "name": "BaseBdev2", 00:21:44.909 "uuid": "7be076f9-b3e9-5652-bc1e-01da5e742d5b", 00:21:44.909 "is_configured": true, 00:21:44.909 "data_offset": 2048, 00:21:44.909 "data_size": 63488 00:21:44.909 }, 00:21:44.909 { 00:21:44.910 "name": "BaseBdev3", 00:21:44.910 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:44.910 "is_configured": true, 00:21:44.910 "data_offset": 2048, 00:21:44.910 "data_size": 63488 00:21:44.910 }, 00:21:44.910 { 00:21:44.910 "name": "BaseBdev4", 00:21:44.910 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:44.910 "is_configured": true, 00:21:44.910 "data_offset": 2048, 00:21:44.910 "data_size": 63488 00:21:44.910 } 00:21:44.910 ] 00:21:44.910 }' 00:21:44.910 07:22:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:44.910 07:22:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:44.910 07:22:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:44.910 07:22:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:44.910 07:22:18 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:21:45.169 [2024-02-13 07:22:18.819031] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:45.428 [2024-02-13 07:22:18.877691] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:45.428 [2024-02-13 07:22:18.877788] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:45.428 07:22:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:45.428 07:22:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:45.428 "name": "raid_bdev1", 00:21:45.428 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:45.428 "strip_size_kb": 0, 00:21:45.428 "state": "online", 00:21:45.428 "raid_level": "raid1", 00:21:45.428 "superblock": true, 00:21:45.428 "num_base_bdevs": 4, 00:21:45.428 "num_base_bdevs_discovered": 3, 00:21:45.428 "num_base_bdevs_operational": 3, 
00:21:45.428 "base_bdevs_list": [ 00:21:45.428 { 00:21:45.428 "name": null, 00:21:45.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:45.428 "is_configured": false, 00:21:45.428 "data_offset": 2048, 00:21:45.428 "data_size": 63488 00:21:45.428 }, 00:21:45.428 { 00:21:45.428 "name": "BaseBdev2", 00:21:45.428 "uuid": "7be076f9-b3e9-5652-bc1e-01da5e742d5b", 00:21:45.428 "is_configured": true, 00:21:45.428 "data_offset": 2048, 00:21:45.428 "data_size": 63488 00:21:45.428 }, 00:21:45.428 { 00:21:45.428 "name": "BaseBdev3", 00:21:45.428 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:45.428 "is_configured": true, 00:21:45.428 "data_offset": 2048, 00:21:45.428 "data_size": 63488 00:21:45.428 }, 00:21:45.428 { 00:21:45.428 "name": "BaseBdev4", 00:21:45.428 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:45.428 "is_configured": true, 00:21:45.428 "data_offset": 2048, 00:21:45.428 "data_size": 63488 00:21:45.428 } 00:21:45.428 ] 00:21:45.428 }' 00:21:45.428 07:22:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:45.428 07:22:19 -- common/autotest_common.sh@10 -- # set +x 00:21:46.366 07:22:19 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:46.366 07:22:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:46.366 07:22:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:46.366 07:22:19 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:46.366 07:22:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:46.366 07:22:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:46.366 07:22:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:46.366 07:22:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:46.366 "name": "raid_bdev1", 00:21:46.366 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:46.366 "strip_size_kb": 0, 00:21:46.366 "state": "online", 00:21:46.366 "raid_level": "raid1", 00:21:46.366 "superblock": true, 00:21:46.366 "num_base_bdevs": 4, 00:21:46.366 "num_base_bdevs_discovered": 3, 00:21:46.366 "num_base_bdevs_operational": 3, 00:21:46.366 "base_bdevs_list": [ 00:21:46.366 { 00:21:46.366 "name": null, 00:21:46.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:46.366 "is_configured": false, 00:21:46.366 "data_offset": 2048, 00:21:46.366 "data_size": 63488 00:21:46.366 }, 00:21:46.366 { 00:21:46.366 "name": "BaseBdev2", 00:21:46.366 "uuid": "7be076f9-b3e9-5652-bc1e-01da5e742d5b", 00:21:46.366 "is_configured": true, 00:21:46.366 "data_offset": 2048, 00:21:46.366 "data_size": 63488 00:21:46.366 }, 00:21:46.366 { 00:21:46.366 "name": "BaseBdev3", 00:21:46.366 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:46.366 "is_configured": true, 00:21:46.366 "data_offset": 2048, 00:21:46.366 "data_size": 63488 00:21:46.366 }, 00:21:46.366 { 00:21:46.366 "name": "BaseBdev4", 00:21:46.366 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:46.366 "is_configured": true, 00:21:46.366 "data_offset": 2048, 00:21:46.366 "data_size": 63488 00:21:46.366 } 00:21:46.366 ] 00:21:46.366 }' 00:21:46.366 07:22:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:46.625 07:22:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:46.625 07:22:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:46.625 07:22:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:46.625 07:22:20 -- bdev/bdev_raid.sh@613 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:21:46.625 [2024-02-13 07:22:20.318816] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:21:46.625 [2024-02-13 07:22:20.318874] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:46.884 [2024-02-13 07:22:20.329248] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca5310 00:21:46.884 [2024-02-13 07:22:20.331071] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:46.884 07:22:20 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:21:47.820 07:22:21 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:47.820 07:22:21 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:47.820 07:22:21 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:47.820 07:22:21 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:47.820 07:22:21 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:47.820 07:22:21 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:47.820 07:22:21 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.080 "name": "raid_bdev1", 00:21:48.080 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:48.080 "strip_size_kb": 0, 00:21:48.080 "state": "online", 00:21:48.080 "raid_level": "raid1", 00:21:48.080 "superblock": true, 00:21:48.080 "num_base_bdevs": 4, 00:21:48.080 "num_base_bdevs_discovered": 4, 00:21:48.080 "num_base_bdevs_operational": 4, 00:21:48.080 "process": { 00:21:48.080 "type": "rebuild", 00:21:48.080 "target": "spare", 00:21:48.080 "progress": { 00:21:48.080 "blocks": 24576, 00:21:48.080 "percent": 38 00:21:48.080 } 00:21:48.080 }, 00:21:48.080 "base_bdevs_list": [ 00:21:48.080 { 00:21:48.080 "name": "spare", 00:21:48.080 "uuid": "2ecceec9-9f80-56e6-b2f6-685a272f6ed4", 00:21:48.080 "is_configured": true, 00:21:48.080 "data_offset": 2048, 00:21:48.080 "data_size": 63488 00:21:48.080 }, 00:21:48.080 { 00:21:48.080 "name": "BaseBdev2", 00:21:48.080 "uuid": "7be076f9-b3e9-5652-bc1e-01da5e742d5b", 00:21:48.080 "is_configured": true, 00:21:48.080 "data_offset": 2048, 00:21:48.080 "data_size": 63488 00:21:48.080 }, 00:21:48.080 { 00:21:48.080 "name": "BaseBdev3", 00:21:48.080 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:48.080 "is_configured": true, 00:21:48.080 "data_offset": 2048, 00:21:48.080 "data_size": 63488 00:21:48.080 }, 00:21:48.080 { 00:21:48.080 "name": "BaseBdev4", 00:21:48.080 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:48.080 "is_configured": true, 00:21:48.080 "data_offset": 2048, 00:21:48.080 "data_size": 63488 00:21:48.080 } 00:21:48.080 ] 00:21:48.080 }' 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:21:48.080 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:21:48.080 07:22:21 -- 
bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:21:48.080 07:22:21 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:21:48.339 [2024-02-13 07:22:21.921868] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:21:48.339 [2024-02-13 07:22:21.939959] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca5310 00:21:48.597 07:22:22 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:21:48.597 07:22:22 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:21:48.597 07:22:22 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.597 07:22:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.597 07:22:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.597 07:22:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.597 07:22:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.597 07:22:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.597 07:22:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:48.856 "name": "raid_bdev1", 00:21:48.856 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:48.856 "strip_size_kb": 0, 00:21:48.856 "state": "online", 00:21:48.856 "raid_level": "raid1", 00:21:48.856 "superblock": true, 00:21:48.856 "num_base_bdevs": 4, 00:21:48.856 "num_base_bdevs_discovered": 3, 00:21:48.856 "num_base_bdevs_operational": 3, 00:21:48.856 "process": { 00:21:48.856 "type": "rebuild", 00:21:48.856 "target": "spare", 00:21:48.856 "progress": { 00:21:48.856 "blocks": 38912, 00:21:48.856 "percent": 61 00:21:48.856 } 00:21:48.856 }, 00:21:48.856 "base_bdevs_list": [ 00:21:48.856 { 00:21:48.856 "name": "spare", 00:21:48.856 "uuid": "2ecceec9-9f80-56e6-b2f6-685a272f6ed4", 00:21:48.856 "is_configured": true, 00:21:48.856 "data_offset": 2048, 00:21:48.856 "data_size": 63488 00:21:48.856 }, 00:21:48.856 { 00:21:48.856 "name": null, 00:21:48.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:48.856 "is_configured": false, 00:21:48.856 "data_offset": 2048, 00:21:48.856 "data_size": 63488 00:21:48.856 }, 00:21:48.856 { 00:21:48.856 "name": "BaseBdev3", 00:21:48.856 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:48.856 "is_configured": true, 00:21:48.856 "data_offset": 2048, 00:21:48.856 "data_size": 63488 00:21:48.856 }, 00:21:48.856 { 00:21:48.856 "name": "BaseBdev4", 00:21:48.856 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:48.856 "is_configured": true, 00:21:48.856 "data_offset": 2048, 00:21:48.856 "data_size": 63488 00:21:48.856 } 00:21:48.856 ] 00:21:48.856 }' 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@657 -- # local timeout=523 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:48.856 07:22:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:49.115 07:22:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:49.115 "name": "raid_bdev1", 00:21:49.115 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:49.115 "strip_size_kb": 0, 00:21:49.115 "state": "online", 00:21:49.115 "raid_level": "raid1", 00:21:49.115 "superblock": true, 00:21:49.115 "num_base_bdevs": 4, 00:21:49.115 "num_base_bdevs_discovered": 3, 00:21:49.115 "num_base_bdevs_operational": 3, 00:21:49.115 "process": { 00:21:49.115 "type": "rebuild", 00:21:49.115 "target": "spare", 00:21:49.115 "progress": { 00:21:49.115 "blocks": 45056, 00:21:49.115 "percent": 70 00:21:49.115 } 00:21:49.115 }, 00:21:49.115 "base_bdevs_list": [ 00:21:49.115 { 00:21:49.115 "name": "spare", 00:21:49.115 "uuid": "2ecceec9-9f80-56e6-b2f6-685a272f6ed4", 00:21:49.115 "is_configured": true, 00:21:49.115 "data_offset": 2048, 00:21:49.115 "data_size": 63488 00:21:49.115 }, 00:21:49.115 { 00:21:49.115 "name": null, 00:21:49.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:49.115 "is_configured": false, 00:21:49.115 "data_offset": 2048, 00:21:49.115 "data_size": 63488 00:21:49.115 }, 00:21:49.115 { 00:21:49.115 "name": "BaseBdev3", 00:21:49.115 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:49.115 "is_configured": true, 00:21:49.115 "data_offset": 2048, 00:21:49.115 "data_size": 63488 00:21:49.115 }, 00:21:49.115 { 00:21:49.115 "name": "BaseBdev4", 00:21:49.115 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:49.115 "is_configured": true, 00:21:49.115 "data_offset": 2048, 00:21:49.115 "data_size": 63488 00:21:49.115 } 00:21:49.115 ] 00:21:49.115 }' 00:21:49.115 07:22:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:49.115 07:22:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:49.115 07:22:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:49.115 07:22:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:21:49.115 07:22:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:21:50.050 [2024-02-13 07:22:23.450303] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:50.050 [2024-02-13 07:22:23.450414] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:50.050 [2024-02-13 07:22:23.450625] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:50.050 07:22:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:21:50.050 07:22:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:50.050 07:22:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.050 07:22:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:21:50.050 07:22:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:21:50.050 07:22:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.050 07:22:23 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.050 07:22:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.308 07:22:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.308 "name": "raid_bdev1", 00:21:50.308 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:50.308 "strip_size_kb": 0, 00:21:50.308 "state": "online", 00:21:50.308 "raid_level": "raid1", 00:21:50.308 "superblock": true, 00:21:50.308 "num_base_bdevs": 4, 00:21:50.308 "num_base_bdevs_discovered": 3, 00:21:50.308 "num_base_bdevs_operational": 3, 00:21:50.308 "base_bdevs_list": [ 00:21:50.308 { 00:21:50.308 "name": "spare", 00:21:50.308 "uuid": "2ecceec9-9f80-56e6-b2f6-685a272f6ed4", 00:21:50.308 "is_configured": true, 00:21:50.308 "data_offset": 2048, 00:21:50.308 "data_size": 63488 00:21:50.308 }, 00:21:50.308 { 00:21:50.308 "name": null, 00:21:50.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.308 "is_configured": false, 00:21:50.308 "data_offset": 2048, 00:21:50.308 "data_size": 63488 00:21:50.308 }, 00:21:50.308 { 00:21:50.308 "name": "BaseBdev3", 00:21:50.308 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:50.308 "is_configured": true, 00:21:50.308 "data_offset": 2048, 00:21:50.308 "data_size": 63488 00:21:50.308 }, 00:21:50.308 { 00:21:50.308 "name": "BaseBdev4", 00:21:50.308 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:50.308 "is_configured": true, 00:21:50.308 "data_offset": 2048, 00:21:50.308 "data_size": 63488 00:21:50.308 } 00:21:50.308 ] 00:21:50.308 }' 00:21:50.308 07:22:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.308 07:22:23 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:50.308 07:22:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.567 07:22:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:21:50.567 07:22:24 -- bdev/bdev_raid.sh@660 -- # break 00:21:50.567 07:22:24 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:50.567 07:22:24 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:50.567 07:22:24 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:50.567 07:22:24 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:50.567 07:22:24 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:50.567 07:22:24 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.567 07:22:24 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:50.839 07:22:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:50.839 "name": "raid_bdev1", 00:21:50.839 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:50.839 "strip_size_kb": 0, 00:21:50.839 "state": "online", 00:21:50.839 "raid_level": "raid1", 00:21:50.839 "superblock": true, 00:21:50.839 "num_base_bdevs": 4, 00:21:50.839 "num_base_bdevs_discovered": 3, 00:21:50.839 "num_base_bdevs_operational": 3, 00:21:50.839 "base_bdevs_list": [ 00:21:50.839 { 00:21:50.839 "name": "spare", 00:21:50.839 "uuid": "2ecceec9-9f80-56e6-b2f6-685a272f6ed4", 00:21:50.839 "is_configured": true, 00:21:50.839 "data_offset": 2048, 00:21:50.839 "data_size": 63488 00:21:50.839 }, 00:21:50.839 { 00:21:50.839 "name": null, 00:21:50.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:50.839 "is_configured": false, 00:21:50.839 "data_offset": 2048, 00:21:50.839 "data_size": 63488 00:21:50.839 }, 00:21:50.839 { 00:21:50.839 "name": "BaseBdev3", 00:21:50.839 "uuid": 
"04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:50.839 "is_configured": true, 00:21:50.839 "data_offset": 2048, 00:21:50.839 "data_size": 63488 00:21:50.840 }, 00:21:50.840 { 00:21:50.840 "name": "BaseBdev4", 00:21:50.840 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:50.840 "is_configured": true, 00:21:50.840 "data_offset": 2048, 00:21:50.840 "data_size": 63488 00:21:50.840 } 00:21:50.840 ] 00:21:50.840 }' 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:50.840 07:22:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:51.123 07:22:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:51.123 "name": "raid_bdev1", 00:21:51.123 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:51.123 "strip_size_kb": 0, 00:21:51.123 "state": "online", 00:21:51.123 "raid_level": "raid1", 00:21:51.123 "superblock": true, 00:21:51.123 "num_base_bdevs": 4, 00:21:51.123 "num_base_bdevs_discovered": 3, 00:21:51.123 "num_base_bdevs_operational": 3, 00:21:51.123 "base_bdevs_list": [ 00:21:51.123 { 00:21:51.123 "name": "spare", 00:21:51.123 "uuid": "2ecceec9-9f80-56e6-b2f6-685a272f6ed4", 00:21:51.123 "is_configured": true, 00:21:51.123 "data_offset": 2048, 00:21:51.123 "data_size": 63488 00:21:51.123 }, 00:21:51.123 { 00:21:51.123 "name": null, 00:21:51.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:51.123 "is_configured": false, 00:21:51.123 "data_offset": 2048, 00:21:51.123 "data_size": 63488 00:21:51.123 }, 00:21:51.123 { 00:21:51.123 "name": "BaseBdev3", 00:21:51.123 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:51.123 "is_configured": true, 00:21:51.123 "data_offset": 2048, 00:21:51.123 "data_size": 63488 00:21:51.123 }, 00:21:51.123 { 00:21:51.123 "name": "BaseBdev4", 00:21:51.123 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:51.123 "is_configured": true, 00:21:51.123 "data_offset": 2048, 00:21:51.123 "data_size": 63488 00:21:51.123 } 00:21:51.123 ] 00:21:51.123 }' 00:21:51.123 07:22:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:51.123 07:22:24 -- common/autotest_common.sh@10 -- # set +x 00:21:51.759 07:22:25 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:21:52.017 [2024-02-13 07:22:25.531085] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:21:52.017 [2024-02-13 07:22:25.531154] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:52.017 [2024-02-13 07:22:25.531269] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:52.017 [2024-02-13 07:22:25.531383] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:52.017 [2024-02-13 07:22:25.531412] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:21:52.017 07:22:25 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:52.017 07:22:25 -- bdev/bdev_raid.sh@671 -- # jq length 00:21:52.276 07:22:25 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:21:52.276 07:22:25 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:21:52.276 07:22:25 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:21:52.276 07:22:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:52.276 07:22:25 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:21:52.276 07:22:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:52.276 07:22:25 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:21:52.276 07:22:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:52.276 07:22:25 -- bdev/nbd_common.sh@12 -- # local i 00:21:52.276 07:22:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:52.276 07:22:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:52.276 07:22:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:21:52.536 /dev/nbd0 00:21:52.536 07:22:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:52.536 07:22:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:52.536 07:22:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:21:52.536 07:22:26 -- common/autotest_common.sh@855 -- # local i 00:21:52.536 07:22:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:52.536 07:22:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:52.536 07:22:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:21:52.536 07:22:26 -- common/autotest_common.sh@859 -- # break 00:21:52.536 07:22:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:52.536 07:22:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:52.536 07:22:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:52.536 1+0 records in 00:21:52.536 1+0 records out 00:21:52.536 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564143 s, 7.3 MB/s 00:21:52.536 07:22:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:52.536 07:22:26 -- common/autotest_common.sh@872 -- # size=4096 00:21:52.536 07:22:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:52.536 07:22:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:52.536 07:22:26 -- common/autotest_common.sh@875 -- # return 0 00:21:52.536 07:22:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:52.536 07:22:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:52.536 07:22:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:21:52.794 /dev/nbd1 
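The xtrace above is the NBD readiness check: after nbd_start_disk exports a bdev, the test polls /proc/partitions for the device node and then proves it serves I/O with one direct-mode 4 KiB read. A minimal sketch of that helper, reconstructed from the trace (the real waitfornbd in common/autotest_common.sh wraps the dd in its own retry loop and may differ in detail):

  waitfornbd() {
      local nbd_name=$1 i
      # poll up to 20 times, 0.1 s apart, for the kernel to publish the device
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # a single O_DIRECT 4096-byte read confirms the export is actually live
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [ "$(stat -c %s /tmp/nbdtest)" != 0 ]
  }

Once both /dev/nbd0 (BaseBdev1) and /dev/nbd1 (spare) pass the check, the cmp -i 1048576 below compares the two devices past the 1 MiB data offset (2048 blocks of 512 bytes each), which is what confirms the rebuild copied the data onto the spare.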
00:21:52.794 07:22:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:52.794 07:22:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:52.794 07:22:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:21:52.794 07:22:26 -- common/autotest_common.sh@855 -- # local i 00:21:52.794 07:22:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:21:52.794 07:22:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:21:52.794 07:22:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:21:52.794 07:22:26 -- common/autotest_common.sh@859 -- # break 00:21:52.794 07:22:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:21:52.794 07:22:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:21:52.794 07:22:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:52.794 1+0 records in 00:21:52.794 1+0 records out 00:21:52.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417711 s, 9.8 MB/s 00:21:52.794 07:22:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:52.794 07:22:26 -- common/autotest_common.sh@872 -- # size=4096 00:21:52.794 07:22:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:52.794 07:22:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:21:52.794 07:22:26 -- common/autotest_common.sh@875 -- # return 0 00:21:52.794 07:22:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:52.794 07:22:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:52.794 07:22:26 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:21:53.053 07:22:26 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:21:53.053 07:22:26 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:21:53.053 07:22:26 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:21:53.053 07:22:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:53.053 07:22:26 -- bdev/nbd_common.sh@51 -- # local i 00:21:53.053 07:22:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:53.053 07:22:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@41 -- # break 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@45 -- # return 0 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:53.312 07:22:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@35 -- 
# local nbd_name=nbd1 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@41 -- # break 00:21:53.571 07:22:27 -- bdev/nbd_common.sh@45 -- # return 0 00:21:53.571 07:22:27 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:21:53.571 07:22:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:53.571 07:22:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:21:53.571 07:22:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:21:53.831 07:22:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:54.090 [2024-02-13 07:22:27.748524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:54.090 [2024-02-13 07:22:27.748937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.090 [2024-02-13 07:22:27.749021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:21:54.090 [2024-02-13 07:22:27.749210] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.090 [2024-02-13 07:22:27.751849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.090 [2024-02-13 07:22:27.752043] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:54.090 [2024-02-13 07:22:27.752269] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:54.090 [2024-02-13 07:22:27.752436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:54.090 BaseBdev1 00:21:54.090 07:22:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:54.090 07:22:27 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:21:54.090 07:22:27 -- bdev/bdev_raid.sh@696 -- # continue 00:21:54.090 07:22:27 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:54.090 07:22:27 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:21:54.090 07:22:27 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:21:54.348 07:22:27 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:21:54.607 [2024-02-13 07:22:28.180737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:21:54.607 [2024-02-13 07:22:28.181047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:54.607 [2024-02-13 07:22:28.181147] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:21:54.607 [2024-02-13 07:22:28.181397] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:54.607 [2024-02-13 07:22:28.181970] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:54.607 [2024-02-13 
07:22:28.182149] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:21:54.607 [2024-02-13 07:22:28.182361] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:21:54.607 [2024-02-13 07:22:28.182482] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:21:54.607 [2024-02-13 07:22:28.182572] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:54.607 [2024-02-13 07:22:28.182683] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:21:54.607 [2024-02-13 07:22:28.182849] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:21:54.607 BaseBdev3 00:21:54.607 07:22:28 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:21:54.607 07:22:28 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:21:54.607 07:22:28 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:21:54.866 07:22:28 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:21:55.125 [2024-02-13 07:22:28.576836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:21:55.125 [2024-02-13 07:22:28.577144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.125 [2024-02-13 07:22:28.577224] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:21:55.125 [2024-02-13 07:22:28.577471] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.126 [2024-02-13 07:22:28.578084] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.126 [2024-02-13 07:22:28.578237] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:21:55.126 [2024-02-13 07:22:28.578438] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:21:55.126 [2024-02-13 07:22:28.578554] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:21:55.126 BaseBdev4 00:21:55.126 07:22:28 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:21:55.126 07:22:28 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:21:55.385 [2024-02-13 07:22:29.016950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:55.385 [2024-02-13 07:22:29.017241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:55.385 [2024-02-13 07:22:29.017327] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:21:55.385 [2024-02-13 07:22:29.017547] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:55.385 [2024-02-13 07:22:29.018189] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:55.385 [2024-02-13 07:22:29.018383] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:55.385 [2024-02-13 07:22:29.018611] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:21:55.385 
[2024-02-13 07:22:29.018739] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:55.385 spare 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:55.385 07:22:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:55.644 [2024-02-13 07:22:29.118905] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:21:55.644 [2024-02-13 07:22:29.119060] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:21:55.644 [2024-02-13 07:22:29.119244] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc5f20 00:21:55.644 [2024-02-13 07:22:29.119835] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:21:55.644 [2024-02-13 07:22:29.120009] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:21:55.644 [2024-02-13 07:22:29.120251] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:55.644 07:22:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:21:55.644 "name": "raid_bdev1", 00:21:55.644 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:55.644 "strip_size_kb": 0, 00:21:55.644 "state": "online", 00:21:55.644 "raid_level": "raid1", 00:21:55.644 "superblock": true, 00:21:55.644 "num_base_bdevs": 4, 00:21:55.644 "num_base_bdevs_discovered": 3, 00:21:55.644 "num_base_bdevs_operational": 3, 00:21:55.644 "base_bdevs_list": [ 00:21:55.644 { 00:21:55.644 "name": "spare", 00:21:55.644 "uuid": "2ecceec9-9f80-56e6-b2f6-685a272f6ed4", 00:21:55.644 "is_configured": true, 00:21:55.644 "data_offset": 2048, 00:21:55.644 "data_size": 63488 00:21:55.644 }, 00:21:55.644 { 00:21:55.644 "name": null, 00:21:55.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:55.644 "is_configured": false, 00:21:55.644 "data_offset": 2048, 00:21:55.644 "data_size": 63488 00:21:55.644 }, 00:21:55.644 { 00:21:55.644 "name": "BaseBdev3", 00:21:55.644 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:55.644 "is_configured": true, 00:21:55.644 "data_offset": 2048, 00:21:55.644 "data_size": 63488 00:21:55.644 }, 00:21:55.644 { 00:21:55.644 "name": "BaseBdev4", 00:21:55.644 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:55.644 "is_configured": true, 00:21:55.644 "data_offset": 2048, 00:21:55.644 "data_size": 63488 00:21:55.644 } 00:21:55.644 ] 00:21:55.644 }' 00:21:55.644 07:22:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:21:55.644 07:22:29 -- common/autotest_common.sh@10 -- # set +x 00:21:56.212 07:22:29 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:21:56.212 07:22:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:21:56.212 07:22:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:21:56.212 07:22:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:21:56.212 07:22:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:21:56.212 07:22:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.212 07:22:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:56.779 07:22:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:21:56.779 "name": "raid_bdev1", 00:21:56.779 "uuid": "a56de49c-448c-4edc-814d-34cff7f1e04b", 00:21:56.779 "strip_size_kb": 0, 00:21:56.779 "state": "online", 00:21:56.779 "raid_level": "raid1", 00:21:56.779 "superblock": true, 00:21:56.779 "num_base_bdevs": 4, 00:21:56.779 "num_base_bdevs_discovered": 3, 00:21:56.779 "num_base_bdevs_operational": 3, 00:21:56.779 "base_bdevs_list": [ 00:21:56.779 { 00:21:56.779 "name": "spare", 00:21:56.779 "uuid": "2ecceec9-9f80-56e6-b2f6-685a272f6ed4", 00:21:56.779 "is_configured": true, 00:21:56.779 "data_offset": 2048, 00:21:56.779 "data_size": 63488 00:21:56.779 }, 00:21:56.779 { 00:21:56.779 "name": null, 00:21:56.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:56.779 "is_configured": false, 00:21:56.779 "data_offset": 2048, 00:21:56.779 "data_size": 63488 00:21:56.779 }, 00:21:56.779 { 00:21:56.779 "name": "BaseBdev3", 00:21:56.779 "uuid": "04d3f06c-8ac9-52f7-9a33-75a213b6e543", 00:21:56.779 "is_configured": true, 00:21:56.779 "data_offset": 2048, 00:21:56.779 "data_size": 63488 00:21:56.779 }, 00:21:56.779 { 00:21:56.779 "name": "BaseBdev4", 00:21:56.779 "uuid": "fad1501e-1d42-5131-b89d-8f0bbf9d1f1e", 00:21:56.779 "is_configured": true, 00:21:56.779 "data_offset": 2048, 00:21:56.779 "data_size": 63488 00:21:56.779 } 00:21:56.779 ] 00:21:56.779 }' 00:21:56.779 07:22:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:21:56.779 07:22:30 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:21:56.779 07:22:30 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:21:56.779 07:22:30 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:21:56.779 07:22:30 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:21:56.779 07:22:30 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:57.038 07:22:30 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:21:57.038 07:22:30 -- bdev/bdev_raid.sh@709 -- # killprocess 130273 00:21:57.038 07:22:30 -- common/autotest_common.sh@924 -- # '[' -z 130273 ']' 00:21:57.038 07:22:30 -- common/autotest_common.sh@928 -- # kill -0 130273 00:21:57.038 07:22:30 -- common/autotest_common.sh@929 -- # uname 00:21:57.038 07:22:30 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:21:57.038 07:22:30 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 130273 00:21:57.038 killing process with pid 130273 00:21:57.038 Received shutdown signal, test time was about 60.000000 seconds 00:21:57.038 00:21:57.038 Latency(us) 00:21:57.038 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.038 =================================================================================================================== 00:21:57.039 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.039 07:22:30 -- common/autotest_common.sh@930 -- # 
process_name=reactor_0 00:21:57.039 07:22:30 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:21:57.039 07:22:30 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 130273' 00:21:57.039 07:22:30 -- common/autotest_common.sh@943 -- # kill 130273 00:21:57.039 07:22:30 -- common/autotest_common.sh@948 -- # wait 130273 00:21:57.039 [2024-02-13 07:22:30.546204] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:57.039 [2024-02-13 07:22:30.546323] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:57.039 [2024-02-13 07:22:30.546458] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:57.039 [2024-02-13 07:22:30.546514] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:21:57.297 [2024-02-13 07:22:30.899427] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:58.235 ************************************ 00:21:58.235 END TEST raid_rebuild_test_sb 00:21:58.235 ************************************ 00:21:58.235 07:22:31 -- bdev/bdev_raid.sh@711 -- # return 0 00:21:58.235 00:21:58.235 real 0m27.251s 00:21:58.235 user 0m39.874s 00:21:58.235 sys 0m4.130s 00:21:58.235 07:22:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:58.235 07:22:31 -- common/autotest_common.sh@10 -- # set +x 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:21:58.494 07:22:31 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:21:58.494 07:22:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:21:58.494 07:22:31 -- common/autotest_common.sh@10 -- # set +x 00:21:58.494 ************************************ 00:21:58.494 START TEST raid_rebuild_test_io 00:21:58.494 ************************************ 00:21:58.494 07:22:31 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid1 4 false true 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:21:58.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
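The teardown just traced follows the standard killprocess pattern: verify the pid is still alive with kill -0, capture the process name for the log message, send SIGTERM, and wait so bdevperf can flush its shutdown statistics (the Latency table above) before the next test starts. A condensed sketch, assuming the helper matches this xtrace (the canonical version lives in common/autotest_common.sh):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                                   # fails if already gone
      [ "$(uname)" = Linux ] && ps --no-headers -o comm= "$pid"
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                       # SIGTERM, then reap
  }

From here the suite moves on to raid_rebuild_test_io: the same raid1 rebuild scenario, but without a superblock and with background I/O running during the rebuild.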
00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@544 -- # raid_pid=130973 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@545 -- # waitforlisten 130973 /var/tmp/spdk-raid.sock 00:21:58.494 07:22:31 -- common/autotest_common.sh@817 -- # '[' -z 130973 ']' 00:21:58.494 07:22:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:21:58.494 07:22:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:58.494 07:22:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:21:58.494 07:22:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:58.494 07:22:31 -- common/autotest_common.sh@10 -- # set +x 00:21:58.494 07:22:31 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:21:58.494 [2024-02-13 07:22:32.027905] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:21:58.494 [2024-02-13 07:22:32.028332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130973 ] 00:21:58.494 I/O size of 3145728 is greater than zero copy threshold (65536). 00:21:58.494 Zero copy mechanism will not be used. 
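This variant drives the rebuild under load: bdevperf is started suspended (-z) against its own RPC socket, the raid bdev is assembled over that socket, and only then is the 60-second random read/write workload (-w randrw -M 50, 3 MiB I/Os at queue depth 2) released, which is why the zero-copy notice above is printed. Condensed from the trace (same binaries, flags, and socket path as shown; waitforlisten is the helper whose 'Waiting for process...' message appears above):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 \
      -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
  # ...bdev_malloc_create / bdev_raid_create RPCs below build raid_bdev1, then:
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/spdk-raid.sock perform_tests

The -L bdev_raid flag is what enables all of the *DEBUG* bdev_raid.c lines in this part of the log.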
00:21:58.752 [2024-02-13 07:22:32.193428] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.752 [2024-02-13 07:22:32.376934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.011 [2024-02-13 07:22:32.549401] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:21:59.269 07:22:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:59.269 07:22:32 -- common/autotest_common.sh@850 -- # return 0 00:21:59.269 07:22:32 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:59.269 07:22:32 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:59.269 07:22:32 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:21:59.528 BaseBdev1 00:21:59.528 07:22:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:59.528 07:22:33 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:59.528 07:22:33 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:21:59.787 BaseBdev2 00:21:59.787 07:22:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:21:59.787 07:22:33 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:21:59.787 07:22:33 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:00.046 BaseBdev3 00:22:00.046 07:22:33 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:00.046 07:22:33 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:22:00.046 07:22:33 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:22:00.304 BaseBdev4 00:22:00.304 07:22:33 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:00.563 spare_malloc 00:22:00.563 07:22:34 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:00.822 spare_delay 00:22:00.822 07:22:34 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:01.081 [2024-02-13 07:22:34.529234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:01.081 [2024-02-13 07:22:34.529567] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:01.081 [2024-02-13 07:22:34.529645] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:22:01.081 [2024-02-13 07:22:34.529909] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:01.081 [2024-02-13 07:22:34.532337] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:01.082 [2024-02-13 07:22:34.532553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:01.082 spare 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:01.082 [2024-02-13 07:22:34.721413] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:01.082 [2024-02-13 07:22:34.723729] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:01.082 [2024-02-13 07:22:34.723929] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:01.082 [2024-02-13 07:22:34.724015] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:01.082 [2024-02-13 07:22:34.724227] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:22:01.082 [2024-02-13 07:22:34.724328] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:22:01.082 [2024-02-13 07:22:34.724584] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:22:01.082 [2024-02-13 07:22:34.725169] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:22:01.082 [2024-02-13 07:22:34.725326] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:22:01.082 [2024-02-13 07:22:34.725699] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:01.082 07:22:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:01.340 07:22:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:01.340 "name": "raid_bdev1", 00:22:01.340 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:01.340 "strip_size_kb": 0, 00:22:01.340 "state": "online", 00:22:01.340 "raid_level": "raid1", 00:22:01.340 "superblock": false, 00:22:01.340 "num_base_bdevs": 4, 00:22:01.340 "num_base_bdevs_discovered": 4, 00:22:01.340 "num_base_bdevs_operational": 4, 00:22:01.340 "base_bdevs_list": [ 00:22:01.340 { 00:22:01.340 "name": "BaseBdev1", 00:22:01.340 "uuid": "0ea65aa0-6c73-4c59-8d32-1a18641b451b", 00:22:01.340 "is_configured": true, 00:22:01.340 "data_offset": 0, 00:22:01.340 "data_size": 65536 00:22:01.340 }, 00:22:01.340 { 00:22:01.340 "name": "BaseBdev2", 00:22:01.340 "uuid": "7b2da779-5544-4b77-8a5a-66210c894855", 00:22:01.340 "is_configured": true, 00:22:01.340 "data_offset": 0, 00:22:01.340 "data_size": 65536 00:22:01.340 }, 00:22:01.340 { 00:22:01.340 "name": "BaseBdev3", 00:22:01.340 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:01.340 "is_configured": true, 00:22:01.340 "data_offset": 0, 00:22:01.340 "data_size": 65536 00:22:01.340 }, 00:22:01.340 { 00:22:01.340 "name": "BaseBdev4", 00:22:01.340 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:01.340 "is_configured": true, 00:22:01.340 "data_offset": 0, 00:22:01.340 "data_size": 65536 00:22:01.340 } 00:22:01.340 ] 00:22:01.340 }' 00:22:01.340 
07:22:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:01.340 07:22:34 -- common/autotest_common.sh@10 -- # set +x 00:22:01.908 07:22:35 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:01.908 07:22:35 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:02.166 [2024-02-13 07:22:35.782163] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:02.166 07:22:35 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:22:02.166 07:22:35 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.166 07:22:35 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:02.425 07:22:35 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:22:02.425 07:22:35 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:02.425 07:22:35 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:02.425 07:22:35 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:02.425 [2024-02-13 07:22:36.092540] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:22:02.425 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:02.425 Zero copy mechanism will not be used. 00:22:02.425 Running I/O for 60 seconds... 00:22:02.684 [2024-02-13 07:22:36.173007] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:02.684 [2024-02-13 07:22:36.179741] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:02.684 07:22:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:02.943 07:22:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:02.943 "name": "raid_bdev1", 00:22:02.943 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:02.943 "strip_size_kb": 0, 00:22:02.943 "state": "online", 00:22:02.943 "raid_level": "raid1", 00:22:02.943 "superblock": false, 00:22:02.943 "num_base_bdevs": 4, 00:22:02.943 "num_base_bdevs_discovered": 3, 00:22:02.943 "num_base_bdevs_operational": 3, 00:22:02.943 "base_bdevs_list": [ 00:22:02.943 { 00:22:02.943 "name": null, 00:22:02.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:02.943 "is_configured": false, 00:22:02.943 "data_offset": 0, 00:22:02.943 "data_size": 65536 00:22:02.943 }, 00:22:02.943 { 00:22:02.943 "name": "BaseBdev2", 00:22:02.943 
"uuid": "7b2da779-5544-4b77-8a5a-66210c894855", 00:22:02.943 "is_configured": true, 00:22:02.943 "data_offset": 0, 00:22:02.943 "data_size": 65536 00:22:02.943 }, 00:22:02.943 { 00:22:02.943 "name": "BaseBdev3", 00:22:02.943 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:02.943 "is_configured": true, 00:22:02.943 "data_offset": 0, 00:22:02.943 "data_size": 65536 00:22:02.943 }, 00:22:02.943 { 00:22:02.943 "name": "BaseBdev4", 00:22:02.943 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:02.943 "is_configured": true, 00:22:02.943 "data_offset": 0, 00:22:02.943 "data_size": 65536 00:22:02.943 } 00:22:02.943 ] 00:22:02.943 }' 00:22:02.943 07:22:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:02.943 07:22:36 -- common/autotest_common.sh@10 -- # set +x 00:22:03.511 07:22:37 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:03.770 [2024-02-13 07:22:37.323057] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:03.770 [2024-02-13 07:22:37.323419] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:03.770 07:22:37 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:03.770 [2024-02-13 07:22:37.368952] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:22:03.770 [2024-02-13 07:22:37.371098] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:04.028 [2024-02-13 07:22:37.479796] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:04.028 [2024-02-13 07:22:37.480504] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:04.028 [2024-02-13 07:22:37.692231] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:04.028 [2024-02-13 07:22:37.692855] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:04.596 [2024-02-13 07:22:38.035538] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:04.596 [2024-02-13 07:22:38.037029] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:04.596 [2024-02-13 07:22:38.246267] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:04.596 [2024-02-13 07:22:38.247140] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:04.855 07:22:38 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:04.855 07:22:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:04.855 07:22:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:04.855 07:22:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:04.855 07:22:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:04.855 07:22:38 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:04.855 07:22:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.114 07:22:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:05.114 "name": "raid_bdev1", 
00:22:05.114 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:05.114 "strip_size_kb": 0, 00:22:05.114 "state": "online", 00:22:05.114 "raid_level": "raid1", 00:22:05.114 "superblock": false, 00:22:05.114 "num_base_bdevs": 4, 00:22:05.114 "num_base_bdevs_discovered": 4, 00:22:05.114 "num_base_bdevs_operational": 4, 00:22:05.114 "process": { 00:22:05.114 "type": "rebuild", 00:22:05.114 "target": "spare", 00:22:05.114 "progress": { 00:22:05.114 "blocks": 12288, 00:22:05.114 "percent": 18 00:22:05.114 } 00:22:05.114 }, 00:22:05.114 "base_bdevs_list": [ 00:22:05.114 { 00:22:05.114 "name": "spare", 00:22:05.114 "uuid": "404ccd5d-645d-543e-97d1-4d0f75ddeac9", 00:22:05.114 "is_configured": true, 00:22:05.114 "data_offset": 0, 00:22:05.114 "data_size": 65536 00:22:05.114 }, 00:22:05.114 { 00:22:05.114 "name": "BaseBdev2", 00:22:05.114 "uuid": "7b2da779-5544-4b77-8a5a-66210c894855", 00:22:05.114 "is_configured": true, 00:22:05.114 "data_offset": 0, 00:22:05.114 "data_size": 65536 00:22:05.114 }, 00:22:05.114 { 00:22:05.114 "name": "BaseBdev3", 00:22:05.114 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:05.114 "is_configured": true, 00:22:05.114 "data_offset": 0, 00:22:05.114 "data_size": 65536 00:22:05.114 }, 00:22:05.114 { 00:22:05.114 "name": "BaseBdev4", 00:22:05.114 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:05.114 "is_configured": true, 00:22:05.114 "data_offset": 0, 00:22:05.114 "data_size": 65536 00:22:05.114 } 00:22:05.114 ] 00:22:05.114 }' 00:22:05.114 07:22:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:05.114 [2024-02-13 07:22:38.619194] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:05.114 [2024-02-13 07:22:38.620026] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:05.114 07:22:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:05.114 07:22:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:05.114 07:22:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:05.114 07:22:38 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:05.114 [2024-02-13 07:22:38.745256] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:05.373 [2024-02-13 07:22:38.949445] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:05.631 [2024-02-13 07:22:39.073377] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:05.631 [2024-02-13 07:22:39.076974] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:05.631 [2024-02-13 07:22:39.115751] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005930 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 
00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:05.631 07:22:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:05.890 07:22:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:05.890 "name": "raid_bdev1", 00:22:05.890 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:05.890 "strip_size_kb": 0, 00:22:05.890 "state": "online", 00:22:05.890 "raid_level": "raid1", 00:22:05.890 "superblock": false, 00:22:05.890 "num_base_bdevs": 4, 00:22:05.890 "num_base_bdevs_discovered": 3, 00:22:05.890 "num_base_bdevs_operational": 3, 00:22:05.890 "base_bdevs_list": [ 00:22:05.890 { 00:22:05.890 "name": null, 00:22:05.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:05.890 "is_configured": false, 00:22:05.890 "data_offset": 0, 00:22:05.890 "data_size": 65536 00:22:05.890 }, 00:22:05.890 { 00:22:05.890 "name": "BaseBdev2", 00:22:05.890 "uuid": "7b2da779-5544-4b77-8a5a-66210c894855", 00:22:05.890 "is_configured": true, 00:22:05.890 "data_offset": 0, 00:22:05.890 "data_size": 65536 00:22:05.890 }, 00:22:05.890 { 00:22:05.890 "name": "BaseBdev3", 00:22:05.890 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:05.890 "is_configured": true, 00:22:05.890 "data_offset": 0, 00:22:05.890 "data_size": 65536 00:22:05.890 }, 00:22:05.890 { 00:22:05.890 "name": "BaseBdev4", 00:22:05.890 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:05.890 "is_configured": true, 00:22:05.890 "data_offset": 0, 00:22:05.890 "data_size": 65536 00:22:05.890 } 00:22:05.890 ] 00:22:05.890 }' 00:22:05.890 07:22:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:05.890 07:22:39 -- common/autotest_common.sh@10 -- # set +x 00:22:06.457 07:22:40 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:06.457 07:22:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:06.457 07:22:40 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:06.457 07:22:40 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:06.457 07:22:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:06.457 07:22:40 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:06.457 07:22:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:06.716 07:22:40 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:06.716 "name": "raid_bdev1", 00:22:06.716 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:06.716 "strip_size_kb": 0, 00:22:06.716 "state": "online", 00:22:06.716 "raid_level": "raid1", 00:22:06.716 "superblock": false, 00:22:06.716 "num_base_bdevs": 4, 00:22:06.716 "num_base_bdevs_discovered": 3, 00:22:06.716 "num_base_bdevs_operational": 3, 00:22:06.716 "base_bdevs_list": [ 00:22:06.716 { 00:22:06.716 "name": null, 00:22:06.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:06.716 "is_configured": false, 00:22:06.716 "data_offset": 0, 00:22:06.716 "data_size": 65536 00:22:06.716 }, 00:22:06.716 { 00:22:06.716 "name": "BaseBdev2", 00:22:06.716 "uuid": "7b2da779-5544-4b77-8a5a-66210c894855", 00:22:06.716 "is_configured": true, 00:22:06.716 "data_offset": 0, 00:22:06.716 "data_size": 65536 00:22:06.716 }, 00:22:06.716 { 00:22:06.716 
"name": "BaseBdev3", 00:22:06.716 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:06.716 "is_configured": true, 00:22:06.716 "data_offset": 0, 00:22:06.716 "data_size": 65536 00:22:06.716 }, 00:22:06.716 { 00:22:06.716 "name": "BaseBdev4", 00:22:06.716 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:06.716 "is_configured": true, 00:22:06.716 "data_offset": 0, 00:22:06.716 "data_size": 65536 00:22:06.716 } 00:22:06.716 ] 00:22:06.716 }' 00:22:06.716 07:22:40 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:06.975 07:22:40 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:06.975 07:22:40 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:06.975 07:22:40 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:06.975 07:22:40 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:07.233 [2024-02-13 07:22:40.748016] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:07.233 [2024-02-13 07:22:40.748400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:07.233 [2024-02-13 07:22:40.800344] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:07.233 [2024-02-13 07:22:40.802891] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:07.233 07:22:40 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:07.233 [2024-02-13 07:22:40.920174] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:07.233 [2024-02-13 07:22:40.922002] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:07.491 [2024-02-13 07:22:41.147260] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:07.491 [2024-02-13 07:22:41.148389] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:08.058 [2024-02-13 07:22:41.496348] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:08.058 [2024-02-13 07:22:41.729031] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:08.058 [2024-02-13 07:22:41.730130] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:08.316 07:22:41 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.316 07:22:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:08.316 07:22:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:08.316 07:22:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:08.316 07:22:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:08.316 07:22:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.316 07:22:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:08.575 07:22:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:08.575 "name": "raid_bdev1", 00:22:08.575 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:08.575 "strip_size_kb": 0, 00:22:08.575 "state": "online", 00:22:08.575 "raid_level": "raid1", 
00:22:08.575 "superblock": false, 00:22:08.575 "num_base_bdevs": 4, 00:22:08.575 "num_base_bdevs_discovered": 4, 00:22:08.575 "num_base_bdevs_operational": 4, 00:22:08.575 "process": { 00:22:08.575 "type": "rebuild", 00:22:08.575 "target": "spare", 00:22:08.575 "progress": { 00:22:08.575 "blocks": 12288, 00:22:08.575 "percent": 18 00:22:08.576 } 00:22:08.576 }, 00:22:08.576 "base_bdevs_list": [ 00:22:08.576 { 00:22:08.576 "name": "spare", 00:22:08.576 "uuid": "404ccd5d-645d-543e-97d1-4d0f75ddeac9", 00:22:08.576 "is_configured": true, 00:22:08.576 "data_offset": 0, 00:22:08.576 "data_size": 65536 00:22:08.576 }, 00:22:08.576 { 00:22:08.576 "name": "BaseBdev2", 00:22:08.576 "uuid": "7b2da779-5544-4b77-8a5a-66210c894855", 00:22:08.576 "is_configured": true, 00:22:08.576 "data_offset": 0, 00:22:08.576 "data_size": 65536 00:22:08.576 }, 00:22:08.576 { 00:22:08.576 "name": "BaseBdev3", 00:22:08.576 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:08.576 "is_configured": true, 00:22:08.576 "data_offset": 0, 00:22:08.576 "data_size": 65536 00:22:08.576 }, 00:22:08.576 { 00:22:08.576 "name": "BaseBdev4", 00:22:08.576 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:08.576 "is_configured": true, 00:22:08.576 "data_offset": 0, 00:22:08.576 "data_size": 65536 00:22:08.576 } 00:22:08.576 ] 00:22:08.576 }' 00:22:08.576 07:22:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:08.576 07:22:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:08.576 07:22:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:08.576 [2024-02-13 07:22:42.153893] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:08.576 [2024-02-13 07:22:42.154762] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:08.576 07:22:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:08.576 07:22:42 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:22:08.576 07:22:42 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:08.576 07:22:42 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:08.576 07:22:42 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:08.576 07:22:42 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:08.834 [2024-02-13 07:22:42.392036] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:08.834 [2024-02-13 07:22:42.429694] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005930 00:22:08.834 [2024-02-13 07:22:42.429888] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ba0 00:22:08.834 07:22:42 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:08.834 07:22:42 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:08.834 07:22:42 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:08.834 07:22:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:08.834 07:22:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:08.834 07:22:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:08.834 07:22:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:08.834 07:22:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:08.834 
07:22:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.093 [2024-02-13 07:22:42.570346] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:09.093 [2024-02-13 07:22:42.571639] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:09.093 07:22:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:09.093 "name": "raid_bdev1", 00:22:09.093 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:09.093 "strip_size_kb": 0, 00:22:09.093 "state": "online", 00:22:09.093 "raid_level": "raid1", 00:22:09.093 "superblock": false, 00:22:09.093 "num_base_bdevs": 4, 00:22:09.093 "num_base_bdevs_discovered": 3, 00:22:09.093 "num_base_bdevs_operational": 3, 00:22:09.093 "process": { 00:22:09.093 "type": "rebuild", 00:22:09.093 "target": "spare", 00:22:09.093 "progress": { 00:22:09.093 "blocks": 20480, 00:22:09.093 "percent": 31 00:22:09.093 } 00:22:09.093 }, 00:22:09.093 "base_bdevs_list": [ 00:22:09.093 { 00:22:09.093 "name": "spare", 00:22:09.093 "uuid": "404ccd5d-645d-543e-97d1-4d0f75ddeac9", 00:22:09.093 "is_configured": true, 00:22:09.093 "data_offset": 0, 00:22:09.093 "data_size": 65536 00:22:09.093 }, 00:22:09.093 { 00:22:09.093 "name": null, 00:22:09.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.093 "is_configured": false, 00:22:09.093 "data_offset": 0, 00:22:09.093 "data_size": 65536 00:22:09.093 }, 00:22:09.093 { 00:22:09.093 "name": "BaseBdev3", 00:22:09.093 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:09.093 "is_configured": true, 00:22:09.093 "data_offset": 0, 00:22:09.093 "data_size": 65536 00:22:09.093 }, 00:22:09.093 { 00:22:09.093 "name": "BaseBdev4", 00:22:09.093 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:09.093 "is_configured": true, 00:22:09.093 "data_offset": 0, 00:22:09.093 "data_size": 65536 00:22:09.093 } 00:22:09.093 ] 00:22:09.093 }' 00:22:09.093 07:22:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:09.093 07:22:42 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:09.093 07:22:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:09.352 [2024-02-13 07:22:42.799852] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@657 -- # local timeout=543 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:09.352 07:22:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:09.614 07:22:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:09.614 "name": "raid_bdev1", 00:22:09.614 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:09.614 "strip_size_kb": 0, 00:22:09.614 "state": "online", 00:22:09.614 "raid_level": 
"raid1", 00:22:09.614 "superblock": false, 00:22:09.614 "num_base_bdevs": 4, 00:22:09.614 "num_base_bdevs_discovered": 3, 00:22:09.614 "num_base_bdevs_operational": 3, 00:22:09.614 "process": { 00:22:09.614 "type": "rebuild", 00:22:09.614 "target": "spare", 00:22:09.614 "progress": { 00:22:09.614 "blocks": 24576, 00:22:09.614 "percent": 37 00:22:09.614 } 00:22:09.614 }, 00:22:09.614 "base_bdevs_list": [ 00:22:09.614 { 00:22:09.614 "name": "spare", 00:22:09.614 "uuid": "404ccd5d-645d-543e-97d1-4d0f75ddeac9", 00:22:09.614 "is_configured": true, 00:22:09.614 "data_offset": 0, 00:22:09.614 "data_size": 65536 00:22:09.614 }, 00:22:09.614 { 00:22:09.614 "name": null, 00:22:09.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:09.614 "is_configured": false, 00:22:09.615 "data_offset": 0, 00:22:09.615 "data_size": 65536 00:22:09.615 }, 00:22:09.615 { 00:22:09.615 "name": "BaseBdev3", 00:22:09.615 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:09.615 "is_configured": true, 00:22:09.615 "data_offset": 0, 00:22:09.615 "data_size": 65536 00:22:09.615 }, 00:22:09.615 { 00:22:09.615 "name": "BaseBdev4", 00:22:09.615 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:09.615 "is_configured": true, 00:22:09.615 "data_offset": 0, 00:22:09.615 "data_size": 65536 00:22:09.615 } 00:22:09.615 ] 00:22:09.615 }' 00:22:09.615 07:22:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:09.615 07:22:43 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:09.615 07:22:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:09.615 [2024-02-13 07:22:43.140228] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:09.615 [2024-02-13 07:22:43.140952] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:09.615 07:22:43 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:09.615 07:22:43 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:09.615 [2024-02-13 07:22:43.286042] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:09.885 [2024-02-13 07:22:43.504088] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:09.885 [2024-02-13 07:22:43.504885] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:10.463 [2024-02-13 07:22:43.851879] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:22:10.721 07:22:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:10.721 07:22:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:10.721 07:22:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:10.721 07:22:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:10.721 07:22:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:10.721 07:22:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:10.721 07:22:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:10.721 07:22:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:10.722 [2024-02-13 07:22:44.291504] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 
45056 offset_begin: 43008 offset_end: 49152 00:22:10.980 07:22:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:10.980 "name": "raid_bdev1", 00:22:10.980 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:10.980 "strip_size_kb": 0, 00:22:10.980 "state": "online", 00:22:10.980 "raid_level": "raid1", 00:22:10.980 "superblock": false, 00:22:10.980 "num_base_bdevs": 4, 00:22:10.980 "num_base_bdevs_discovered": 3, 00:22:10.980 "num_base_bdevs_operational": 3, 00:22:10.980 "process": { 00:22:10.980 "type": "rebuild", 00:22:10.980 "target": "spare", 00:22:10.980 "progress": { 00:22:10.980 "blocks": 45056, 00:22:10.980 "percent": 68 00:22:10.980 } 00:22:10.980 }, 00:22:10.980 "base_bdevs_list": [ 00:22:10.980 { 00:22:10.980 "name": "spare", 00:22:10.980 "uuid": "404ccd5d-645d-543e-97d1-4d0f75ddeac9", 00:22:10.980 "is_configured": true, 00:22:10.980 "data_offset": 0, 00:22:10.980 "data_size": 65536 00:22:10.980 }, 00:22:10.980 { 00:22:10.980 "name": null, 00:22:10.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:10.980 "is_configured": false, 00:22:10.980 "data_offset": 0, 00:22:10.980 "data_size": 65536 00:22:10.980 }, 00:22:10.980 { 00:22:10.980 "name": "BaseBdev3", 00:22:10.980 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:10.980 "is_configured": true, 00:22:10.980 "data_offset": 0, 00:22:10.980 "data_size": 65536 00:22:10.980 }, 00:22:10.980 { 00:22:10.980 "name": "BaseBdev4", 00:22:10.980 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:10.980 "is_configured": true, 00:22:10.980 "data_offset": 0, 00:22:10.980 "data_size": 65536 00:22:10.980 } 00:22:10.981 ] 00:22:10.981 }' 00:22:10.981 07:22:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:10.981 07:22:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:10.981 07:22:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:10.981 07:22:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:10.981 07:22:44 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:11.239 [2024-02-13 07:22:44.722963] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:22:11.498 [2024-02-13 07:22:44.939893] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:22:12.065 07:22:45 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:12.065 07:22:45 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:12.065 07:22:45 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:12.065 07:22:45 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:12.065 07:22:45 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:12.065 07:22:45 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:12.065 07:22:45 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:12.065 07:22:45 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:12.065 [2024-02-13 07:22:45.708508] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:12.324 07:22:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:12.324 "name": "raid_bdev1", 00:22:12.324 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:12.324 "strip_size_kb": 0, 00:22:12.324 "state": "online", 00:22:12.324 "raid_level": "raid1", 00:22:12.324 "superblock": false, 00:22:12.324 "num_base_bdevs": 4, 00:22:12.324 
"num_base_bdevs_discovered": 3, 00:22:12.324 "num_base_bdevs_operational": 3, 00:22:12.324 "process": { 00:22:12.324 "type": "rebuild", 00:22:12.324 "target": "spare", 00:22:12.324 "progress": { 00:22:12.324 "blocks": 65536, 00:22:12.324 "percent": 100 00:22:12.324 } 00:22:12.324 }, 00:22:12.324 "base_bdevs_list": [ 00:22:12.324 { 00:22:12.324 "name": "spare", 00:22:12.324 "uuid": "404ccd5d-645d-543e-97d1-4d0f75ddeac9", 00:22:12.324 "is_configured": true, 00:22:12.324 "data_offset": 0, 00:22:12.324 "data_size": 65536 00:22:12.324 }, 00:22:12.324 { 00:22:12.324 "name": null, 00:22:12.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:12.324 "is_configured": false, 00:22:12.324 "data_offset": 0, 00:22:12.324 "data_size": 65536 00:22:12.324 }, 00:22:12.324 { 00:22:12.324 "name": "BaseBdev3", 00:22:12.324 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:12.324 "is_configured": true, 00:22:12.324 "data_offset": 0, 00:22:12.324 "data_size": 65536 00:22:12.324 }, 00:22:12.324 { 00:22:12.324 "name": "BaseBdev4", 00:22:12.324 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:12.324 "is_configured": true, 00:22:12.324 "data_offset": 0, 00:22:12.324 "data_size": 65536 00:22:12.324 } 00:22:12.324 ] 00:22:12.324 }' 00:22:12.324 07:22:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:12.324 [2024-02-13 07:22:45.798778] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:12.324 [2024-02-13 07:22:45.808683] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:12.324 07:22:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:12.324 07:22:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:12.324 07:22:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:12.324 07:22:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:13.262 07:22:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:13.262 07:22:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:13.262 07:22:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:13.262 07:22:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:13.262 07:22:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:13.262 07:22:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:13.262 07:22:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.262 07:22:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.520 07:22:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.520 "name": "raid_bdev1", 00:22:13.520 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:13.520 "strip_size_kb": 0, 00:22:13.520 "state": "online", 00:22:13.520 "raid_level": "raid1", 00:22:13.520 "superblock": false, 00:22:13.520 "num_base_bdevs": 4, 00:22:13.520 "num_base_bdevs_discovered": 3, 00:22:13.520 "num_base_bdevs_operational": 3, 00:22:13.520 "base_bdevs_list": [ 00:22:13.520 { 00:22:13.520 "name": "spare", 00:22:13.521 "uuid": "404ccd5d-645d-543e-97d1-4d0f75ddeac9", 00:22:13.521 "is_configured": true, 00:22:13.521 "data_offset": 0, 00:22:13.521 "data_size": 65536 00:22:13.521 }, 00:22:13.521 { 00:22:13.521 "name": null, 00:22:13.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.521 "is_configured": false, 00:22:13.521 "data_offset": 0, 00:22:13.521 "data_size": 65536 00:22:13.521 }, 00:22:13.521 { 00:22:13.521 "name": 
"BaseBdev3", 00:22:13.521 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:13.521 "is_configured": true, 00:22:13.521 "data_offset": 0, 00:22:13.521 "data_size": 65536 00:22:13.521 }, 00:22:13.521 { 00:22:13.521 "name": "BaseBdev4", 00:22:13.521 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:13.521 "is_configured": true, 00:22:13.521 "data_offset": 0, 00:22:13.521 "data_size": 65536 00:22:13.521 } 00:22:13.521 ] 00:22:13.521 }' 00:22:13.521 07:22:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:13.521 07:22:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:13.521 07:22:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@660 -- # break 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:13.780 "name": "raid_bdev1", 00:22:13.780 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:13.780 "strip_size_kb": 0, 00:22:13.780 "state": "online", 00:22:13.780 "raid_level": "raid1", 00:22:13.780 "superblock": false, 00:22:13.780 "num_base_bdevs": 4, 00:22:13.780 "num_base_bdevs_discovered": 3, 00:22:13.780 "num_base_bdevs_operational": 3, 00:22:13.780 "base_bdevs_list": [ 00:22:13.780 { 00:22:13.780 "name": "spare", 00:22:13.780 "uuid": "404ccd5d-645d-543e-97d1-4d0f75ddeac9", 00:22:13.780 "is_configured": true, 00:22:13.780 "data_offset": 0, 00:22:13.780 "data_size": 65536 00:22:13.780 }, 00:22:13.780 { 00:22:13.780 "name": null, 00:22:13.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:13.780 "is_configured": false, 00:22:13.780 "data_offset": 0, 00:22:13.780 "data_size": 65536 00:22:13.780 }, 00:22:13.780 { 00:22:13.780 "name": "BaseBdev3", 00:22:13.780 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:13.780 "is_configured": true, 00:22:13.780 "data_offset": 0, 00:22:13.780 "data_size": 65536 00:22:13.780 }, 00:22:13.780 { 00:22:13.780 "name": "BaseBdev4", 00:22:13.780 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:13.780 "is_configured": true, 00:22:13.780 "data_offset": 0, 00:22:13.780 "data_size": 65536 00:22:13.780 } 00:22:13.780 ] 00:22:13.780 }' 00:22:13.780 07:22:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:14.039 07:22:47 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:14.039 07:22:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:14.298 07:22:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:14.298 "name": "raid_bdev1", 00:22:14.298 "uuid": "e7914b85-0409-42f8-b3e2-d2e9fa30598b", 00:22:14.298 "strip_size_kb": 0, 00:22:14.298 "state": "online", 00:22:14.298 "raid_level": "raid1", 00:22:14.298 "superblock": false, 00:22:14.298 "num_base_bdevs": 4, 00:22:14.298 "num_base_bdevs_discovered": 3, 00:22:14.298 "num_base_bdevs_operational": 3, 00:22:14.298 "base_bdevs_list": [ 00:22:14.298 { 00:22:14.298 "name": "spare", 00:22:14.298 "uuid": "404ccd5d-645d-543e-97d1-4d0f75ddeac9", 00:22:14.298 "is_configured": true, 00:22:14.298 "data_offset": 0, 00:22:14.298 "data_size": 65536 00:22:14.298 }, 00:22:14.298 { 00:22:14.298 "name": null, 00:22:14.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:14.298 "is_configured": false, 00:22:14.298 "data_offset": 0, 00:22:14.298 "data_size": 65536 00:22:14.298 }, 00:22:14.298 { 00:22:14.298 "name": "BaseBdev3", 00:22:14.298 "uuid": "139ff937-b53c-4ebc-88f5-7fdbc56dc4b0", 00:22:14.298 "is_configured": true, 00:22:14.298 "data_offset": 0, 00:22:14.298 "data_size": 65536 00:22:14.298 }, 00:22:14.298 { 00:22:14.298 "name": "BaseBdev4", 00:22:14.298 "uuid": "979c2fd1-9310-4900-8cfa-4eefa728063c", 00:22:14.298 "is_configured": true, 00:22:14.298 "data_offset": 0, 00:22:14.298 "data_size": 65536 00:22:14.298 } 00:22:14.298 ] 00:22:14.298 }' 00:22:14.299 07:22:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:14.299 07:22:47 -- common/autotest_common.sh@10 -- # set +x 00:22:14.866 07:22:48 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:15.125 [2024-02-13 07:22:48.760574] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:15.125 [2024-02-13 07:22:48.760791] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:15.384 00:22:15.384 Latency(us) 00:22:15.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.384 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:15.384 raid_bdev1 : 12.77 87.66 262.97 0.00 0.00 15997.16 303.48 118203.11 00:22:15.384 =================================================================================================================== 00:22:15.384 Total : 87.66 262.97 0.00 0.00 15997.16 303.48 118203.11 00:22:15.384 [2024-02-13 07:22:48.875609] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:15.384 [2024-02-13 07:22:48.875865] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:15.384 0 00:22:15.384 [2024-02-13 07:22:48.876012] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:15.384 [2024-02-13 07:22:48.876031] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:22:15.384 07:22:48 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:15.384 07:22:48 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:15.643 07:22:49 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:15.643 07:22:49 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:15.643 07:22:49 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:15.643 07:22:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:15.643 07:22:49 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:15.643 07:22:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:15.643 07:22:49 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:15.643 07:22:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:15.643 07:22:49 -- bdev/nbd_common.sh@12 -- # local i 00:22:15.643 07:22:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:15.643 07:22:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:15.643 07:22:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:22:15.902 /dev/nbd0 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:15.902 07:22:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:15.902 07:22:49 -- common/autotest_common.sh@855 -- # local i 00:22:15.902 07:22:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:15.902 07:22:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:15.902 07:22:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:15.902 07:22:49 -- common/autotest_common.sh@859 -- # break 00:22:15.902 07:22:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:15.902 07:22:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:15.902 07:22:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:15.902 1+0 records in 00:22:15.902 1+0 records out 00:22:15.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318641 s, 12.9 MB/s 00:22:15.902 07:22:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.902 07:22:49 -- common/autotest_common.sh@872 -- # size=4096 00:22:15.902 07:22:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:15.902 07:22:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:15.902 07:22:49 -- common/autotest_common.sh@875 -- # return 0 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:15.902 07:22:49 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:15.902 07:22:49 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:15.902 07:22:49 -- bdev/bdev_raid.sh@678 -- # continue 00:22:15.902 07:22:49 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:15.902 07:22:49 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:15.902 07:22:49 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:15.902 07:22:49 -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@12 -- # local i 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:15.902 07:22:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:16.161 /dev/nbd1 00:22:16.161 07:22:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:16.161 07:22:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:16.161 07:22:49 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:22:16.161 07:22:49 -- common/autotest_common.sh@855 -- # local i 00:22:16.161 07:22:49 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:16.161 07:22:49 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:16.161 07:22:49 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:22:16.161 07:22:49 -- common/autotest_common.sh@859 -- # break 00:22:16.161 07:22:49 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:16.161 07:22:49 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:16.161 07:22:49 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:16.161 1+0 records in 00:22:16.161 1+0 records out 00:22:16.161 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567041 s, 7.2 MB/s 00:22:16.161 07:22:49 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.161 07:22:49 -- common/autotest_common.sh@872 -- # size=4096 00:22:16.161 07:22:49 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.161 07:22:49 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:16.161 07:22:49 -- common/autotest_common.sh@875 -- # return 0 00:22:16.161 07:22:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:16.161 07:22:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.161 07:22:49 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:16.420 07:22:49 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:16.420 07:22:49 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.420 07:22:49 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:16.420 07:22:49 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:16.420 07:22:49 -- bdev/nbd_common.sh@51 -- # local i 00:22:16.420 07:22:49 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.420 07:22:49 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:16.420 07:22:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:16.420 07:22:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:16.420 07:22:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:16.420 07:22:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:16.420 07:22:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.420 07:22:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:16.420 07:22:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:16.679 07:22:50 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:16.679 07:22:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:16.679 07:22:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 
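[editorial sketch] The nbd_common.sh helpers being traced here follow one polling idiom: grep /proc/partitions until the nbd node appears (or, for the exit variant, disappears), and on startup also read a single direct 4 KiB block so a registered-but-hung device cannot pass the check. A sketch of the startup side, assuming the same 20 x 0.1 s retry budget shown in the trace; the real helper also records the copy into an nbdtest scratch file, which this sketch omits:

    # waitfornbd: poll until the nbd node is listed, then do one O_DIRECT
    # read to prove the device actually services I/O.
    waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
      done
      dd if=/dev/"$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }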
00:22:16.679 07:22:50 -- bdev/nbd_common.sh@41 -- # break 00:22:16.679 07:22:50 -- bdev/nbd_common.sh@45 -- # return 0 00:22:16.679 07:22:50 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:16.679 07:22:50 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:16.679 07:22:50 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:16.679 07:22:50 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.679 07:22:50 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:16.679 07:22:50 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:16.680 07:22:50 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:16.680 07:22:50 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:16.680 07:22:50 -- bdev/nbd_common.sh@12 -- # local i 00:22:16.680 07:22:50 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:16.680 07:22:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.680 07:22:50 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:16.939 /dev/nbd1 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:16.939 07:22:50 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:22:16.939 07:22:50 -- common/autotest_common.sh@855 -- # local i 00:22:16.939 07:22:50 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:16.939 07:22:50 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:16.939 07:22:50 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:22:16.939 07:22:50 -- common/autotest_common.sh@859 -- # break 00:22:16.939 07:22:50 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:16.939 07:22:50 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:16.939 07:22:50 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:16.939 1+0 records in 00:22:16.939 1+0 records out 00:22:16.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562201 s, 7.3 MB/s 00:22:16.939 07:22:50 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.939 07:22:50 -- common/autotest_common.sh@872 -- # size=4096 00:22:16.939 07:22:50 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:16.939 07:22:50 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:16.939 07:22:50 -- common/autotest_common.sh@875 -- # return 0 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:16.939 07:22:50 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:22:16.939 07:22:50 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@51 -- # local i 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:16.939 07:22:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:17.198 07:22:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:17.198 07:22:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 
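[editorial sketch] Each surviving mirror is compared against the rebuilt spare the same way; a sketch of one round, reusing the rpc shorthand and waitfornbd sketch above, with the device nodes and offset flag taken from the trace (-i 0 because this array was created without a superblock, so member data starts at byte 0):

    # Expose the rebuilt spare and one mirror over NBD and byte-compare them;
    # for raid1 a finished rebuild must leave the members identical.
    $rpc nbd_start_disk spare /dev/nbd0 && waitfornbd nbd0
    $rpc nbd_start_disk BaseBdev4 /dev/nbd1 && waitfornbd nbd1
    cmp -i 0 /dev/nbd0 /dev/nbd1      # -i skips the data_offset bytes; 0 here
    $rpc nbd_stop_disk /dev/nbd1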
00:22:17.198 07:22:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:17.198 07:22:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.198 07:22:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.198 07:22:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:17.198 07:22:50 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@41 -- # break 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.456 07:22:50 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@51 -- # local i 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:17.456 07:22:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@41 -- # break 00:22:17.715 07:22:51 -- bdev/nbd_common.sh@45 -- # return 0 00:22:17.715 07:22:51 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:22:17.715 07:22:51 -- bdev/bdev_raid.sh@709 -- # killprocess 130973 00:22:17.715 07:22:51 -- common/autotest_common.sh@924 -- # '[' -z 130973 ']' 00:22:17.715 07:22:51 -- common/autotest_common.sh@928 -- # kill -0 130973 00:22:17.715 07:22:51 -- common/autotest_common.sh@929 -- # uname 00:22:17.715 07:22:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:17.715 07:22:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 130973 00:22:17.715 07:22:51 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:22:17.715 07:22:51 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:22:17.715 07:22:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 130973' 00:22:17.715 killing process with pid 130973 00:22:17.715 07:22:51 -- common/autotest_common.sh@943 -- # kill 130973 00:22:17.715 Received shutdown signal, test time was about 15.272394 seconds 00:22:17.715 00:22:17.715 Latency(us) 00:22:17.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.715 =================================================================================================================== 00:22:17.715 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.715 07:22:51 -- common/autotest_common.sh@948 -- # wait 130973 00:22:17.715 [2024-02-13 07:22:51.367261] 
bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:17.974 [2024-02-13 07:22:51.649199] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:19.373 ************************************ 00:22:19.373 END TEST raid_rebuild_test_io 00:22:19.373 ************************************ 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:19.373 00:22:19.373 real 0m20.699s 00:22:19.373 user 0m31.841s 00:22:19.373 sys 0m2.275s 00:22:19.373 07:22:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:19.373 07:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:22:19.373 07:22:52 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:22:19.373 07:22:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:19.373 07:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:19.373 ************************************ 00:22:19.373 START TEST raid_rebuild_test_sb_io 00:22:19.373 ************************************ 00:22:19.373 07:22:52 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid1 4 true true 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@544 -- # raid_pid=131559 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@545 -- # waitforlisten 131559 /var/tmp/spdk-raid.sock 00:22:19.373 07:22:52 -- bdev/bdev_raid.sh@543 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:22:19.373 07:22:52 -- common/autotest_common.sh@817 -- # '[' -z 131559 ']' 00:22:19.373 07:22:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:19.373 07:22:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:19.373 07:22:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:19.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:19.373 07:22:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:19.373 07:22:52 -- common/autotest_common.sh@10 -- # set +x 00:22:19.373 [2024-02-13 07:22:52.783422] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:22:19.373 [2024-02-13 07:22:52.783814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131559 ] 00:22:19.373 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:19.373 Zero copy mechanism will not be used. 00:22:19.373 [2024-02-13 07:22:52.927008] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:19.632 [2024-02-13 07:22:53.099394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.632 [2024-02-13 07:22:53.271662] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:20.198 07:22:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:20.198 07:22:53 -- common/autotest_common.sh@850 -- # return 0 00:22:20.198 07:22:53 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:20.198 07:22:53 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:20.198 07:22:53 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:22:20.457 BaseBdev1_malloc 00:22:20.457 07:22:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:20.715 [2024-02-13 07:22:54.208262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:20.715 [2024-02-13 07:22:54.208505] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.715 [2024-02-13 07:22:54.208581] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:22:20.715 [2024-02-13 07:22:54.208719] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.715 [2024-02-13 07:22:54.210708] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.715 [2024-02-13 07:22:54.210881] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:20.715 BaseBdev1 00:22:20.715 07:22:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:20.715 07:22:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:20.715 07:22:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:22:20.973 BaseBdev2_malloc 00:22:20.973 07:22:54 -- bdev/bdev_raid.sh@551 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:22:20.973 [2024-02-13 07:22:54.649647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:22:20.973 [2024-02-13 07:22:54.649878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:20.973 [2024-02-13 07:22:54.649956] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:22:20.973 [2024-02-13 07:22:54.650252] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:20.973 [2024-02-13 07:22:54.652146] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:20.973 [2024-02-13 07:22:54.652317] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:22:20.973 BaseBdev2 00:22:20.973 07:22:54 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:20.973 07:22:54 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:20.973 07:22:54 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:22:21.231 BaseBdev3_malloc 00:22:21.231 07:22:54 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:21.489 [2024-02-13 07:22:55.047433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:21.489 [2024-02-13 07:22:55.047659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.489 [2024-02-13 07:22:55.047733] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:22:21.489 [2024-02-13 07:22:55.047983] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.489 [2024-02-13 07:22:55.049955] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.489 [2024-02-13 07:22:55.050110] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:21.489 BaseBdev3 00:22:21.489 07:22:55 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:22:21.489 07:22:55 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:22:21.489 07:22:55 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:22:21.748 BaseBdev4_malloc 00:22:21.748 07:22:55 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:21.748 [2024-02-13 07:22:55.435133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:21.748 [2024-02-13 07:22:55.435360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:21.748 [2024-02-13 07:22:55.435426] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:22:21.748 [2024-02-13 07:22:55.435545] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:21.748 [2024-02-13 07:22:55.437691] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:21.748 [2024-02-13 07:22:55.437846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:21.748 BaseBdev4 00:22:22.007 07:22:55 -- bdev/bdev_raid.sh@558 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:22:22.007 spare_malloc 00:22:22.007 07:22:55 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:22:22.266 spare_delay 00:22:22.266 07:22:55 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:22.525 [2024-02-13 07:22:55.992916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:22.525 [2024-02-13 07:22:55.993148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:22.525 [2024-02-13 07:22:55.993214] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:22:22.525 [2024-02-13 07:22:55.993342] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:22.525 [2024-02-13 07:22:55.995318] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:22.525 [2024-02-13 07:22:55.995479] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:22.525 spare 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:22:22.525 [2024-02-13 07:22:56.177048] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:22.525 [2024-02-13 07:22:56.178724] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:22.525 [2024-02-13 07:22:56.178911] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:22.525 [2024-02-13 07:22:56.179082] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:22.525 [2024-02-13 07:22:56.179390] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:22:22.525 [2024-02-13 07:22:56.179518] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:22.525 [2024-02-13 07:22:56.179714] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:22:22.525 [2024-02-13 07:22:56.180139] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:22:22.525 [2024-02-13 07:22:56.180249] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:22:22.525 [2024-02-13 07:22:56.180462] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:22.525 07:22:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:22.784 07:22:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:22.784 "name": "raid_bdev1", 00:22:22.784 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:22.784 "strip_size_kb": 0, 00:22:22.784 "state": "online", 00:22:22.784 "raid_level": "raid1", 00:22:22.784 "superblock": true, 00:22:22.784 "num_base_bdevs": 4, 00:22:22.784 "num_base_bdevs_discovered": 4, 00:22:22.784 "num_base_bdevs_operational": 4, 00:22:22.784 "base_bdevs_list": [ 00:22:22.784 { 00:22:22.784 "name": "BaseBdev1", 00:22:22.784 "uuid": "0b459014-d580-5482-9ac2-cd5683db3423", 00:22:22.784 "is_configured": true, 00:22:22.784 "data_offset": 2048, 00:22:22.784 "data_size": 63488 00:22:22.784 }, 00:22:22.784 { 00:22:22.784 "name": "BaseBdev2", 00:22:22.784 "uuid": "8b53007b-9ea4-5b3e-88ac-becde902b6ba", 00:22:22.784 "is_configured": true, 00:22:22.784 "data_offset": 2048, 00:22:22.784 "data_size": 63488 00:22:22.784 }, 00:22:22.784 { 00:22:22.784 "name": "BaseBdev3", 00:22:22.784 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:22.784 "is_configured": true, 00:22:22.784 "data_offset": 2048, 00:22:22.784 "data_size": 63488 00:22:22.784 }, 00:22:22.784 { 00:22:22.784 "name": "BaseBdev4", 00:22:22.784 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:22.784 "is_configured": true, 00:22:22.784 "data_offset": 2048, 00:22:22.784 "data_size": 63488 00:22:22.784 } 00:22:22.784 ] 00:22:22.784 }' 00:22:22.784 07:22:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:22.784 07:22:56 -- common/autotest_common.sh@10 -- # set +x 00:22:23.351 07:22:57 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:22:23.351 07:22:57 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:22:23.610 [2024-02-13 07:22:57.285369] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:22:23.610 07:22:57 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:22:23.610 07:22:57 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:23.610 07:22:57 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:22:23.868 07:22:57 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:22:23.868 07:22:57 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:22:23.868 07:22:57 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:22:23.868 07:22:57 -- bdev/bdev_raid.sh@574 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:22:24.127 [2024-02-13 07:22:57.583671] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005c70 00:22:24.127 I/O size of 3145728 is greater than zero copy threshold (65536). 00:22:24.127 Zero copy mechanism will not be used. 00:22:24.127 Running I/O for 60 seconds... 
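[editor's note] The span above builds the raid topology that the rest of the run exercises: each base bdev is a 32 MiB malloc bdev (512-byte blocks, hence the 65536 total blocks that become blockcnt 63488 once the 2048-block superblock offset is reserved) wrapped in a passthru bdev, and the spare is additionally routed through a delay bdev so that rebuild I/O is slow enough to race against. A minimal standalone sketch of the same construction, assuming an SPDK target already listening on /var/tmp/spdk-raid.sock and using only RPCs that appear verbatim in this log (the rpc() wrapper function is a local convenience, and the -w/-n delay values are, per bdev_delay's parameters, write latencies in microseconds):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Four base legs: 32 MiB malloc bdevs with 512-byte blocks, each behind a passthru shim
# so it can be hot-removed and re-registered independently of the backing malloc bdev.
for i in 1 2 3 4; do
    rpc bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    rpc bdev_passthru_create -b "BaseBdev${i}_malloc" -p "BaseBdev${i}"
done

# The spare gets a delay bdev in the stack (100000 us average and p99 write latency)
# so a rebuild onto it takes long enough to be interrupted mid-flight.
rpc bdev_malloc_create 32 512 -b spare_malloc
rpc bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
rpc bdev_passthru_create -b spare_delay -p spare

# raid1 with an on-disk superblock (-s); the superblock is what produces the
# 2048-block data_offset seen in every JSON dump in this log. The spare is not a
# member yet; it is attached later with bdev_raid_add_base_bdev raid_bdev1 spare.
rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1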
00:22:24.127 [2024-02-13 07:22:57.666778] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:24.127 [2024-02-13 07:22:57.678736] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:24.127 07:22:57 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:24.385 07:22:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:24.385 "name": "raid_bdev1", 00:22:24.385 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:24.385 "strip_size_kb": 0, 00:22:24.385 "state": "online", 00:22:24.385 "raid_level": "raid1", 00:22:24.385 "superblock": true, 00:22:24.385 "num_base_bdevs": 4, 00:22:24.385 "num_base_bdevs_discovered": 3, 00:22:24.385 "num_base_bdevs_operational": 3, 00:22:24.385 "base_bdevs_list": [ 00:22:24.385 { 00:22:24.385 "name": null, 00:22:24.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:24.385 "is_configured": false, 00:22:24.385 "data_offset": 2048, 00:22:24.385 "data_size": 63488 00:22:24.385 }, 00:22:24.385 { 00:22:24.385 "name": "BaseBdev2", 00:22:24.385 "uuid": "8b53007b-9ea4-5b3e-88ac-becde902b6ba", 00:22:24.385 "is_configured": true, 00:22:24.385 "data_offset": 2048, 00:22:24.385 "data_size": 63488 00:22:24.385 }, 00:22:24.385 { 00:22:24.385 "name": "BaseBdev3", 00:22:24.385 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:24.385 "is_configured": true, 00:22:24.385 "data_offset": 2048, 00:22:24.385 "data_size": 63488 00:22:24.385 }, 00:22:24.385 { 00:22:24.385 "name": "BaseBdev4", 00:22:24.385 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:24.385 "is_configured": true, 00:22:24.385 "data_offset": 2048, 00:22:24.385 "data_size": 63488 00:22:24.385 } 00:22:24.385 ] 00:22:24.385 }' 00:22:24.385 07:22:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:24.385 07:22:57 -- common/autotest_common.sh@10 -- # set +x 00:22:24.952 07:22:58 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:25.211 [2024-02-13 07:22:58.754704] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:25.211 [2024-02-13 07:22:58.755022] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:25.211 07:22:58 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:22:25.211 [2024-02-13 07:22:58.817851] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:22:25.211 [2024-02-13 07:22:58.819906] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:22:25.470 
[2024-02-13 07:22:58.943778] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:25.470 [2024-02-13 07:22:58.944609] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:25.728 [2024-02-13 07:22:59.170033] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:25.728 [2024-02-13 07:22:59.171091] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:25.987 [2024-02-13 07:22:59.511487] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:25.987 [2024-02-13 07:22:59.636892] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:25.987 [2024-02-13 07:22:59.637754] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:26.245 07:22:59 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:26.245 07:22:59 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:26.245 07:22:59 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:26.245 07:22:59 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:26.245 07:22:59 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:26.245 07:22:59 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:26.245 07:22:59 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:26.504 [2024-02-13 07:22:59.965999] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:26.504 07:23:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:26.504 "name": "raid_bdev1", 00:22:26.504 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:26.504 "strip_size_kb": 0, 00:22:26.504 "state": "online", 00:22:26.504 "raid_level": "raid1", 00:22:26.504 "superblock": true, 00:22:26.504 "num_base_bdevs": 4, 00:22:26.504 "num_base_bdevs_discovered": 4, 00:22:26.504 "num_base_bdevs_operational": 4, 00:22:26.504 "process": { 00:22:26.504 "type": "rebuild", 00:22:26.504 "target": "spare", 00:22:26.504 "progress": { 00:22:26.504 "blocks": 14336, 00:22:26.504 "percent": 22 00:22:26.504 } 00:22:26.504 }, 00:22:26.504 "base_bdevs_list": [ 00:22:26.504 { 00:22:26.504 "name": "spare", 00:22:26.504 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:26.504 "is_configured": true, 00:22:26.504 "data_offset": 2048, 00:22:26.504 "data_size": 63488 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "name": "BaseBdev2", 00:22:26.504 "uuid": "8b53007b-9ea4-5b3e-88ac-becde902b6ba", 00:22:26.504 "is_configured": true, 00:22:26.504 "data_offset": 2048, 00:22:26.504 "data_size": 63488 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "name": "BaseBdev3", 00:22:26.504 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:26.504 "is_configured": true, 00:22:26.504 "data_offset": 2048, 00:22:26.504 "data_size": 63488 00:22:26.504 }, 00:22:26.504 { 00:22:26.504 "name": "BaseBdev4", 00:22:26.504 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:26.504 "is_configured": true, 00:22:26.504 "data_offset": 2048, 00:22:26.504 "data_size": 63488 00:22:26.504 } 00:22:26.504 ] 00:22:26.504 }' 00:22:26.504 07:23:00 -- 
bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:26.504 [2024-02-13 07:23:00.083349] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:26.504 07:23:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:26.504 07:23:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:26.504 07:23:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:26.504 07:23:00 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:22:26.763 [2024-02-13 07:23:00.332458] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:22:26.763 [2024-02-13 07:23:00.427376] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:22:26.763 [2024-02-13 07:23:00.448329] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:27.021 [2024-02-13 07:23:00.556656] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:22:27.021 [2024-02-13 07:23:00.566809] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:27.021 [2024-02-13 07:23:00.599460] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005c70 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:27.021 07:23:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:27.280 07:23:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:27.280 "name": "raid_bdev1", 00:22:27.280 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:27.280 "strip_size_kb": 0, 00:22:27.280 "state": "online", 00:22:27.280 "raid_level": "raid1", 00:22:27.280 "superblock": true, 00:22:27.280 "num_base_bdevs": 4, 00:22:27.280 "num_base_bdevs_discovered": 3, 00:22:27.280 "num_base_bdevs_operational": 3, 00:22:27.280 "base_bdevs_list": [ 00:22:27.280 { 00:22:27.280 "name": null, 00:22:27.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:27.280 "is_configured": false, 00:22:27.280 "data_offset": 2048, 00:22:27.280 "data_size": 63488 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "name": "BaseBdev2", 00:22:27.280 "uuid": "8b53007b-9ea4-5b3e-88ac-becde902b6ba", 00:22:27.280 "is_configured": true, 00:22:27.280 "data_offset": 2048, 00:22:27.280 "data_size": 63488 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "name": "BaseBdev3", 00:22:27.280 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:27.280 "is_configured": true, 
00:22:27.280 "data_offset": 2048, 00:22:27.280 "data_size": 63488 00:22:27.280 }, 00:22:27.280 { 00:22:27.280 "name": "BaseBdev4", 00:22:27.280 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:27.280 "is_configured": true, 00:22:27.280 "data_offset": 2048, 00:22:27.280 "data_size": 63488 00:22:27.280 } 00:22:27.280 ] 00:22:27.280 }' 00:22:27.280 07:23:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:27.280 07:23:00 -- common/autotest_common.sh@10 -- # set +x 00:22:28.215 07:23:01 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:28.215 07:23:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:28.215 07:23:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:28.215 07:23:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:28.215 07:23:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:28.215 07:23:01 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:28.215 07:23:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:28.215 07:23:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:28.215 "name": "raid_bdev1", 00:22:28.215 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:28.215 "strip_size_kb": 0, 00:22:28.215 "state": "online", 00:22:28.215 "raid_level": "raid1", 00:22:28.215 "superblock": true, 00:22:28.215 "num_base_bdevs": 4, 00:22:28.215 "num_base_bdevs_discovered": 3, 00:22:28.215 "num_base_bdevs_operational": 3, 00:22:28.215 "base_bdevs_list": [ 00:22:28.215 { 00:22:28.215 "name": null, 00:22:28.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:28.215 "is_configured": false, 00:22:28.215 "data_offset": 2048, 00:22:28.215 "data_size": 63488 00:22:28.215 }, 00:22:28.215 { 00:22:28.215 "name": "BaseBdev2", 00:22:28.215 "uuid": "8b53007b-9ea4-5b3e-88ac-becde902b6ba", 00:22:28.215 "is_configured": true, 00:22:28.215 "data_offset": 2048, 00:22:28.215 "data_size": 63488 00:22:28.215 }, 00:22:28.215 { 00:22:28.215 "name": "BaseBdev3", 00:22:28.215 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:28.215 "is_configured": true, 00:22:28.215 "data_offset": 2048, 00:22:28.215 "data_size": 63488 00:22:28.215 }, 00:22:28.215 { 00:22:28.215 "name": "BaseBdev4", 00:22:28.215 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:28.215 "is_configured": true, 00:22:28.215 "data_offset": 2048, 00:22:28.215 "data_size": 63488 00:22:28.215 } 00:22:28.215 ] 00:22:28.215 }' 00:22:28.215 07:23:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:28.473 07:23:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:28.473 07:23:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:28.473 07:23:02 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:28.473 07:23:02 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:22:28.732 [2024-02-13 07:23:02.247186] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:22:28.732 [2024-02-13 07:23:02.247400] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:28.732 07:23:02 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:22:28.732 [2024-02-13 07:23:02.302722] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:22:28.732 [2024-02-13 07:23:02.305058] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev 
raid_bdev1 00:22:28.990 [2024-02-13 07:23:02.429078] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:28.990 [2024-02-13 07:23:02.430545] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:22:28.990 [2024-02-13 07:23:02.654696] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:28.990 [2024-02-13 07:23:02.655145] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:22:29.249 [2024-02-13 07:23:02.899327] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:22:29.507 [2024-02-13 07:23:03.016535] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:22:29.766 [2024-02-13 07:23:03.259093] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:22:29.766 07:23:03 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:29.766 07:23:03 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:29.766 07:23:03 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:29.766 07:23:03 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:29.766 07:23:03 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:29.766 07:23:03 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:29.766 07:23:03 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:29.766 [2024-02-13 07:23:03.376240] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.025 "name": "raid_bdev1", 00:22:30.025 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:30.025 "strip_size_kb": 0, 00:22:30.025 "state": "online", 00:22:30.025 "raid_level": "raid1", 00:22:30.025 "superblock": true, 00:22:30.025 "num_base_bdevs": 4, 00:22:30.025 "num_base_bdevs_discovered": 4, 00:22:30.025 "num_base_bdevs_operational": 4, 00:22:30.025 "process": { 00:22:30.025 "type": "rebuild", 00:22:30.025 "target": "spare", 00:22:30.025 "progress": { 00:22:30.025 "blocks": 18432, 00:22:30.025 "percent": 29 00:22:30.025 } 00:22:30.025 }, 00:22:30.025 "base_bdevs_list": [ 00:22:30.025 { 00:22:30.025 "name": "spare", 00:22:30.025 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:30.025 "is_configured": true, 00:22:30.025 "data_offset": 2048, 00:22:30.025 "data_size": 63488 00:22:30.025 }, 00:22:30.025 { 00:22:30.025 "name": "BaseBdev2", 00:22:30.025 "uuid": "8b53007b-9ea4-5b3e-88ac-becde902b6ba", 00:22:30.025 "is_configured": true, 00:22:30.025 "data_offset": 2048, 00:22:30.025 "data_size": 63488 00:22:30.025 }, 00:22:30.025 { 00:22:30.025 "name": "BaseBdev3", 00:22:30.025 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:30.025 "is_configured": true, 00:22:30.025 "data_offset": 2048, 00:22:30.025 "data_size": 63488 00:22:30.025 }, 00:22:30.025 { 00:22:30.025 "name": "BaseBdev4", 00:22:30.025 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:30.025 "is_configured": true, 00:22:30.025 "data_offset": 2048, 00:22:30.025 "data_size": 63488 00:22:30.025 } 00:22:30.025 ] 00:22:30.025 }' 
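[editor's note] Each hot-removal pass above follows the same pattern: bdev_raid_remove_base_bdev drops one leg while bdevperf I/O is in flight, and the harness then asserts that raid_bdev1 stays online with num_base_bdevs_discovered and num_base_bdevs_operational down to 3, the removed slot reported as a null entry with the all-zero UUID, and any active rebuild reflected in the .process object. A condensed sketch of that assertion, assuming the same rpc() wrapper as in the earlier sketch; the jq filters are the ones the harness itself uses, while the individual field extractions are illustrative:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

# Pull the state of raid_bdev1 out of the full bdev_raid_get_bdevs listing.
info=$(rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')

# Degraded but online: raid1 keeps serving I/O with three of four legs.
[ "$(jq -r '.state' <<<"$info")" = online ] || exit 1
[ "$(jq -r '.num_base_bdevs_discovered' <<<"$info")" -eq 3 ] || exit 1
[ "$(jq -r '.num_base_bdevs_operational' <<<"$info")" -eq 3 ] || exit 1

# While a rebuild onto the spare is running these print "rebuild" and "spare";
# once it finishes (or is aborted by removing the target) both fall back to "none".
jq -r '.process.type // "none"' <<<"$info"
jq -r '.process.target // "none"' <<<"$info"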
00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:22:30.025 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:22:30.025 07:23:03 -- bdev/bdev_raid.sh@646 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:22:30.283 [2024-02-13 07:23:03.742093] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:22:30.283 [2024-02-13 07:23:03.867127] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:30.542 [2024-02-13 07:23:04.010539] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005c70 00:22:30.542 [2024-02-13 07:23:04.010852] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000005ee0 00:22:30.542 [2024-02-13 07:23:04.130421] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:22:30.542 07:23:04 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:22:30.542 07:23:04 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:22:30.542 07:23:04 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.542 07:23:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.542 07:23:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.542 07:23:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.542 07:23:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.542 07:23:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.542 07:23:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:30.801 [2024-02-13 07:23:04.352168] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:30.801 "name": "raid_bdev1", 00:22:30.801 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:30.801 "strip_size_kb": 0, 00:22:30.801 "state": "online", 00:22:30.801 "raid_level": "raid1", 00:22:30.801 "superblock": true, 00:22:30.801 "num_base_bdevs": 4, 00:22:30.801 "num_base_bdevs_discovered": 3, 00:22:30.801 "num_base_bdevs_operational": 3, 00:22:30.801 "process": { 00:22:30.801 "type": "rebuild", 00:22:30.801 "target": "spare", 00:22:30.801 "progress": { 00:22:30.801 "blocks": 28672, 00:22:30.801 "percent": 45 00:22:30.801 } 00:22:30.801 }, 00:22:30.801 "base_bdevs_list": [ 00:22:30.801 { 00:22:30.801 "name": "spare", 00:22:30.801 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:30.801 "is_configured": true, 00:22:30.801 "data_offset": 2048, 00:22:30.801 "data_size": 63488 00:22:30.801 }, 00:22:30.801 { 
00:22:30.801 "name": null, 00:22:30.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:30.801 "is_configured": false, 00:22:30.801 "data_offset": 2048, 00:22:30.801 "data_size": 63488 00:22:30.801 }, 00:22:30.801 { 00:22:30.801 "name": "BaseBdev3", 00:22:30.801 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:30.801 "is_configured": true, 00:22:30.801 "data_offset": 2048, 00:22:30.801 "data_size": 63488 00:22:30.801 }, 00:22:30.801 { 00:22:30.801 "name": "BaseBdev4", 00:22:30.801 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:30.801 "is_configured": true, 00:22:30.801 "data_offset": 2048, 00:22:30.801 "data_size": 63488 00:22:30.801 } 00:22:30.801 ] 00:22:30.801 }' 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@657 -- # local timeout=565 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:30.801 07:23:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:31.059 [2024-02-13 07:23:04.607737] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:22:31.059 07:23:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:31.059 "name": "raid_bdev1", 00:22:31.059 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:31.059 "strip_size_kb": 0, 00:22:31.059 "state": "online", 00:22:31.059 "raid_level": "raid1", 00:22:31.059 "superblock": true, 00:22:31.059 "num_base_bdevs": 4, 00:22:31.059 "num_base_bdevs_discovered": 3, 00:22:31.059 "num_base_bdevs_operational": 3, 00:22:31.059 "process": { 00:22:31.059 "type": "rebuild", 00:22:31.059 "target": "spare", 00:22:31.059 "progress": { 00:22:31.059 "blocks": 32768, 00:22:31.059 "percent": 51 00:22:31.059 } 00:22:31.059 }, 00:22:31.059 "base_bdevs_list": [ 00:22:31.059 { 00:22:31.059 "name": "spare", 00:22:31.059 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:31.059 "is_configured": true, 00:22:31.059 "data_offset": 2048, 00:22:31.059 "data_size": 63488 00:22:31.059 }, 00:22:31.059 { 00:22:31.059 "name": null, 00:22:31.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:31.059 "is_configured": false, 00:22:31.059 "data_offset": 2048, 00:22:31.059 "data_size": 63488 00:22:31.059 }, 00:22:31.059 { 00:22:31.059 "name": "BaseBdev3", 00:22:31.059 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:31.059 "is_configured": true, 00:22:31.059 "data_offset": 2048, 00:22:31.059 "data_size": 63488 00:22:31.059 }, 00:22:31.059 { 00:22:31.059 "name": "BaseBdev4", 00:22:31.059 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:31.059 "is_configured": true, 00:22:31.059 "data_offset": 2048, 00:22:31.059 "data_size": 63488 00:22:31.059 } 
00:22:31.059 ] 00:22:31.059 }' 00:22:31.059 07:23:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:31.059 07:23:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:31.059 07:23:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:31.317 07:23:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:31.317 07:23:04 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:31.575 [2024-02-13 07:23:05.133102] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:22:31.864 [2024-02-13 07:23:05.478173] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:22:32.184 07:23:05 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:32.184 07:23:05 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:32.184 07:23:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:32.184 07:23:05 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:32.184 07:23:05 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:32.184 07:23:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:32.184 07:23:05 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:32.184 07:23:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:32.442 [2024-02-13 07:23:06.025917] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:22:32.442 07:23:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:32.442 "name": "raid_bdev1", 00:22:32.442 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:32.442 "strip_size_kb": 0, 00:22:32.442 "state": "online", 00:22:32.442 "raid_level": "raid1", 00:22:32.442 "superblock": true, 00:22:32.442 "num_base_bdevs": 4, 00:22:32.442 "num_base_bdevs_discovered": 3, 00:22:32.442 "num_base_bdevs_operational": 3, 00:22:32.442 "process": { 00:22:32.442 "type": "rebuild", 00:22:32.442 "target": "spare", 00:22:32.442 "progress": { 00:22:32.442 "blocks": 57344, 00:22:32.442 "percent": 90 00:22:32.442 } 00:22:32.442 }, 00:22:32.442 "base_bdevs_list": [ 00:22:32.442 { 00:22:32.442 "name": "spare", 00:22:32.442 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:32.442 "is_configured": true, 00:22:32.442 "data_offset": 2048, 00:22:32.442 "data_size": 63488 00:22:32.442 }, 00:22:32.442 { 00:22:32.442 "name": null, 00:22:32.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:32.442 "is_configured": false, 00:22:32.442 "data_offset": 2048, 00:22:32.442 "data_size": 63488 00:22:32.442 }, 00:22:32.442 { 00:22:32.442 "name": "BaseBdev3", 00:22:32.442 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:32.442 "is_configured": true, 00:22:32.442 "data_offset": 2048, 00:22:32.442 "data_size": 63488 00:22:32.442 }, 00:22:32.442 { 00:22:32.442 "name": "BaseBdev4", 00:22:32.442 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:32.442 "is_configured": true, 00:22:32.442 "data_offset": 2048, 00:22:32.442 "data_size": 63488 00:22:32.442 } 00:22:32.442 ] 00:22:32.442 }' 00:22:32.442 07:23:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:32.442 07:23:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:22:32.442 07:23:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:32.700 [2024-02-13 07:23:06.140054] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:22:32.700 07:23:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:22:32.700 07:23:06 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:22:32.958 [2024-02-13 07:23:06.464790] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:22:32.958 [2024-02-13 07:23:06.570486] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:22:32.958 [2024-02-13 07:23:06.572666] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:33.524 07:23:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:22:33.524 07:23:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:22:33.524 07:23:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:33.524 07:23:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:22:33.524 07:23:07 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:22:33.525 07:23:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:33.525 07:23:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:33.525 07:23:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:33.782 07:23:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:33.782 "name": "raid_bdev1", 00:22:33.782 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:33.782 "strip_size_kb": 0, 00:22:33.782 "state": "online", 00:22:33.782 "raid_level": "raid1", 00:22:33.782 "superblock": true, 00:22:33.782 "num_base_bdevs": 4, 00:22:33.782 "num_base_bdevs_discovered": 3, 00:22:33.782 "num_base_bdevs_operational": 3, 00:22:33.782 "base_bdevs_list": [ 00:22:33.782 { 00:22:33.782 "name": "spare", 00:22:33.782 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:33.782 "is_configured": true, 00:22:33.782 "data_offset": 2048, 00:22:33.782 "data_size": 63488 00:22:33.782 }, 00:22:33.782 { 00:22:33.782 "name": null, 00:22:33.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:33.782 "is_configured": false, 00:22:33.782 "data_offset": 2048, 00:22:33.782 "data_size": 63488 00:22:33.782 }, 00:22:33.782 { 00:22:33.782 "name": "BaseBdev3", 00:22:33.782 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:33.783 "is_configured": true, 00:22:33.783 "data_offset": 2048, 00:22:33.783 "data_size": 63488 00:22:33.783 }, 00:22:33.783 { 00:22:33.783 "name": "BaseBdev4", 00:22:33.783 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:33.783 "is_configured": true, 00:22:33.783 "data_offset": 2048, 00:22:33.783 "data_size": 63488 00:22:33.783 } 00:22:33.783 ] 00:22:33.783 }' 00:22:33.783 07:23:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:33.783 07:23:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:22:33.783 07:23:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@660 -- # break 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:34.041 
07:23:07 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:34.041 "name": "raid_bdev1", 00:22:34.041 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:34.041 "strip_size_kb": 0, 00:22:34.041 "state": "online", 00:22:34.041 "raid_level": "raid1", 00:22:34.041 "superblock": true, 00:22:34.041 "num_base_bdevs": 4, 00:22:34.041 "num_base_bdevs_discovered": 3, 00:22:34.041 "num_base_bdevs_operational": 3, 00:22:34.041 "base_bdevs_list": [ 00:22:34.041 { 00:22:34.041 "name": "spare", 00:22:34.041 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:34.041 "is_configured": true, 00:22:34.041 "data_offset": 2048, 00:22:34.041 "data_size": 63488 00:22:34.041 }, 00:22:34.041 { 00:22:34.041 "name": null, 00:22:34.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.041 "is_configured": false, 00:22:34.041 "data_offset": 2048, 00:22:34.041 "data_size": 63488 00:22:34.041 }, 00:22:34.041 { 00:22:34.041 "name": "BaseBdev3", 00:22:34.041 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:34.041 "is_configured": true, 00:22:34.041 "data_offset": 2048, 00:22:34.041 "data_size": 63488 00:22:34.041 }, 00:22:34.041 { 00:22:34.041 "name": "BaseBdev4", 00:22:34.041 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:34.041 "is_configured": true, 00:22:34.041 "data_offset": 2048, 00:22:34.041 "data_size": 63488 00:22:34.041 } 00:22:34.041 ] 00:22:34.041 }' 00:22:34.041 07:23:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:34.299 07:23:07 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:34.299 07:23:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:34.299 07:23:07 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:34.299 07:23:07 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:34.299 07:23:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:34.300 07:23:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:34.558 07:23:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:34.558 "name": "raid_bdev1", 00:22:34.558 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:34.558 "strip_size_kb": 0, 00:22:34.558 "state": "online", 00:22:34.558 "raid_level": "raid1", 00:22:34.558 "superblock": true, 00:22:34.558 "num_base_bdevs": 4, 00:22:34.558 "num_base_bdevs_discovered": 3, 00:22:34.558 "num_base_bdevs_operational": 3, 00:22:34.558 "base_bdevs_list": [ 00:22:34.558 { 00:22:34.558 "name": "spare", 00:22:34.558 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:34.558 
"is_configured": true, 00:22:34.558 "data_offset": 2048, 00:22:34.558 "data_size": 63488 00:22:34.558 }, 00:22:34.558 { 00:22:34.558 "name": null, 00:22:34.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:34.558 "is_configured": false, 00:22:34.558 "data_offset": 2048, 00:22:34.558 "data_size": 63488 00:22:34.558 }, 00:22:34.558 { 00:22:34.558 "name": "BaseBdev3", 00:22:34.558 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:34.558 "is_configured": true, 00:22:34.558 "data_offset": 2048, 00:22:34.558 "data_size": 63488 00:22:34.558 }, 00:22:34.558 { 00:22:34.558 "name": "BaseBdev4", 00:22:34.558 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:34.558 "is_configured": true, 00:22:34.558 "data_offset": 2048, 00:22:34.558 "data_size": 63488 00:22:34.558 } 00:22:34.558 ] 00:22:34.558 }' 00:22:34.558 07:23:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:34.558 07:23:08 -- common/autotest_common.sh@10 -- # set +x 00:22:35.124 07:23:08 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:22:35.124 [2024-02-13 07:23:08.802452] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:35.124 [2024-02-13 07:23:08.802652] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:35.383 00:22:35.383 Latency(us) 00:22:35.383 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.383 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:22:35.383 raid_bdev1 : 11.26 110.17 330.51 0.00 0.00 13168.18 307.20 116296.61 00:22:35.383 =================================================================================================================== 00:22:35.383 Total : 110.17 330.51 0.00 0.00 13168.18 307.20 116296.61 00:22:35.383 [2024-02-13 07:23:08.865108] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:35.383 [2024-02-13 07:23:08.865273] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:35.383 [2024-02-13 07:23:08.865410] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:35.383 [2024-02-13 07:23:08.865605] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:22:35.383 0 00:22:35.383 07:23:08 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:35.383 07:23:08 -- bdev/bdev_raid.sh@671 -- # jq length 00:22:35.642 07:23:09 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:22:35.642 07:23:09 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:22:35.642 07:23:09 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:22:35.642 07:23:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:35.642 07:23:09 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:35.642 07:23:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:35.642 07:23:09 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:35.642 07:23:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:35.642 07:23:09 -- bdev/nbd_common.sh@12 -- # local i 00:22:35.642 07:23:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:35.642 07:23:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.642 07:23:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare 
/dev/nbd0 00:22:35.642 /dev/nbd0 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:35.900 07:23:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:22:35.900 07:23:09 -- common/autotest_common.sh@855 -- # local i 00:22:35.900 07:23:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:35.900 07:23:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:35.900 07:23:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:22:35.900 07:23:09 -- common/autotest_common.sh@859 -- # break 00:22:35.900 07:23:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:35.900 07:23:09 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:35.900 07:23:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:35.900 1+0 records in 00:22:35.900 1+0 records out 00:22:35.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756585 s, 5.4 MB/s 00:22:35.900 07:23:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.900 07:23:09 -- common/autotest_common.sh@872 -- # size=4096 00:22:35.900 07:23:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.900 07:23:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:35.900 07:23:09 -- common/autotest_common.sh@875 -- # return 0 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.900 07:23:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:35.900 07:23:09 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:22:35.900 07:23:09 -- bdev/bdev_raid.sh@678 -- # continue 00:22:35.900 07:23:09 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:35.900 07:23:09 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:22:35.900 07:23:09 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@12 -- # local i 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:22:35.900 /dev/nbd1 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:35.900 07:23:09 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:35.900 07:23:09 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:22:35.900 07:23:09 -- common/autotest_common.sh@855 -- # local i 00:22:35.900 07:23:09 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:35.900 07:23:09 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:35.900 07:23:09 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:22:35.900 07:23:09 -- common/autotest_common.sh@859 -- # break 00:22:35.900 07:23:09 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:35.900 07:23:09 -- 
common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:35.900 07:23:09 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:35.900 1+0 records in 00:22:35.900 1+0 records out 00:22:35.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050307 s, 8.1 MB/s 00:22:35.900 07:23:09 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.900 07:23:09 -- common/autotest_common.sh@872 -- # size=4096 00:22:35.900 07:23:09 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:35.900 07:23:09 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:35.900 07:23:09 -- common/autotest_common.sh@875 -- # return 0 00:22:35.901 07:23:09 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:35.901 07:23:09 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:35.901 07:23:09 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:36.159 07:23:09 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:36.159 07:23:09 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.159 07:23:09 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:36.159 07:23:09 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.159 07:23:09 -- bdev/nbd_common.sh@51 -- # local i 00:22:36.159 07:23:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.159 07:23:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:36.418 07:23:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:36.418 07:23:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:36.418 07:23:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:36.418 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:36.418 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.418 07:23:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:36.418 07:23:10 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@41 -- # break 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@45 -- # return 0 00:22:36.676 07:23:10 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:22:36.676 07:23:10 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:22:36.676 07:23:10 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@12 -- # local i 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:22:36.676 /dev/nbd1 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:36.676 07:23:10 -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd1 00:22:36.676 07:23:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:22:36.676 07:23:10 -- common/autotest_common.sh@855 -- # local i 00:22:36.676 07:23:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:22:36.676 07:23:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:22:36.676 07:23:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:22:36.676 07:23:10 -- common/autotest_common.sh@859 -- # break 00:22:36.676 07:23:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:22:36.676 07:23:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:22:36.676 07:23:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:22:36.676 1+0 records in 00:22:36.677 1+0 records out 00:22:36.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439652 s, 9.3 MB/s 00:22:36.677 07:23:10 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.677 07:23:10 -- common/autotest_common.sh@872 -- # size=4096 00:22:36.677 07:23:10 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:22:36.677 07:23:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:22:36.677 07:23:10 -- common/autotest_common.sh@875 -- # return 0 00:22:36.677 07:23:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:36.677 07:23:10 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:22:36.677 07:23:10 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:22:36.934 07:23:10 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:22:36.934 07:23:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:36.934 07:23:10 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:36.934 07:23:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:36.934 07:23:10 -- bdev/nbd_common.sh@51 -- # local i 00:22:36.934 07:23:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:36.934 07:23:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@41 -- # break 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.192 07:23:10 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@51 -- # local i 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:37.192 07:23:10 -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:22:37.450 07:23:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:37.450 07:23:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:37.450 07:23:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:37.450 07:23:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:37.450 07:23:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.450 07:23:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:37.450 07:23:11 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:22:37.709 07:23:11 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:22:37.709 07:23:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:37.709 07:23:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:37.709 07:23:11 -- bdev/nbd_common.sh@41 -- # break 00:22:37.709 07:23:11 -- bdev/nbd_common.sh@45 -- # return 0 00:22:37.709 07:23:11 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:22:37.709 07:23:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.709 07:23:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:22:37.709 07:23:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:22:37.709 07:23:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:22:37.966 [2024-02-13 07:23:11.589701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:22:37.967 [2024-02-13 07:23:11.589913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:37.967 [2024-02-13 07:23:11.589989] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:22:37.967 [2024-02-13 07:23:11.590246] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:37.967 [2024-02-13 07:23:11.592287] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:37.967 [2024-02-13 07:23:11.592475] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:22:37.967 [2024-02-13 07:23:11.592718] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:22:37.967 [2024-02-13 07:23:11.592883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:37.967 BaseBdev1 00:22:37.967 07:23:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.967 07:23:11 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:22:37.967 07:23:11 -- bdev/bdev_raid.sh@696 -- # continue 00:22:37.967 07:23:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:37.967 07:23:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:22:37.967 07:23:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:22:38.224 07:23:11 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:22:38.482 [2024-02-13 07:23:11.961809] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:22:38.482 [2024-02-13 07:23:11.961988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.482 [2024-02-13 07:23:11.962053] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x61600000c380 00:22:38.482 [2024-02-13 07:23:11.962159] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.482 [2024-02-13 07:23:11.962566] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.482 [2024-02-13 07:23:11.962739] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:22:38.482 [2024-02-13 07:23:11.962910] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:22:38.482 [2024-02-13 07:23:11.963002] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:22:38.482 [2024-02-13 07:23:11.963079] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:22:38.482 [2024-02-13 07:23:11.963132] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state configuring 00:22:38.482 [2024-02-13 07:23:11.963369] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:38.482 BaseBdev3 00:22:38.482 07:23:11 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:22:38.482 07:23:11 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:22:38.482 07:23:11 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:22:38.482 07:23:12 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:22:38.739 [2024-02-13 07:23:12.329914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:22:38.739 [2024-02-13 07:23:12.330096] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.739 [2024-02-13 07:23:12.330155] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:22:38.739 [2024-02-13 07:23:12.330269] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:38.739 [2024-02-13 07:23:12.330658] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:22:38.739 [2024-02-13 07:23:12.330827] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:22:38.739 [2024-02-13 07:23:12.330988] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:22:38.739 [2024-02-13 07:23:12.331090] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:22:38.739 BaseBdev4 00:22:38.739 07:23:12 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:22:38.996 07:23:12 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:22:38.996 [2024-02-13 07:23:12.690041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:22:38.996 [2024-02-13 07:23:12.690229] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:22:38.996 [2024-02-13 07:23:12.690286] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:22:38.996 [2024-02-13 07:23:12.690391] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:22:39.254 [2024-02-13 07:23:12.690885] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:22:39.254 [2024-02-13 07:23:12.691071] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:22:39.254 [2024-02-13 07:23:12.691242] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:22:39.254 [2024-02-13 07:23:12.691387] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:22:39.254 spare 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:39.254 [2024-02-13 07:23:12.791523] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c680 00:22:39.254 [2024-02-13 07:23:12.791644] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:22:39.254 [2024-02-13 07:23:12.791782] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00003a220 00:22:39.254 [2024-02-13 07:23:12.792213] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c680 00:22:39.254 [2024-02-13 07:23:12.792330] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c680 00:22:39.254 [2024-02-13 07:23:12.792561] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:39.254 "name": "raid_bdev1", 00:22:39.254 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:39.254 "strip_size_kb": 0, 00:22:39.254 "state": "online", 00:22:39.254 "raid_level": "raid1", 00:22:39.254 "superblock": true, 00:22:39.254 "num_base_bdevs": 4, 00:22:39.254 "num_base_bdevs_discovered": 3, 00:22:39.254 "num_base_bdevs_operational": 3, 00:22:39.254 "base_bdevs_list": [ 00:22:39.254 { 00:22:39.254 "name": "spare", 00:22:39.254 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:39.254 "is_configured": true, 00:22:39.254 "data_offset": 2048, 00:22:39.254 "data_size": 63488 00:22:39.254 }, 00:22:39.254 { 00:22:39.254 "name": null, 00:22:39.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:39.254 "is_configured": false, 00:22:39.254 "data_offset": 2048, 00:22:39.254 "data_size": 63488 00:22:39.254 }, 00:22:39.254 { 00:22:39.254 "name": "BaseBdev3", 00:22:39.254 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:39.254 "is_configured": true, 00:22:39.254 "data_offset": 2048, 00:22:39.254 "data_size": 63488 00:22:39.254 }, 00:22:39.254 { 00:22:39.254 "name": "BaseBdev4", 00:22:39.254 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:39.254 "is_configured": true, 00:22:39.254 "data_offset": 2048, 00:22:39.254 
"data_size": 63488 00:22:39.254 } 00:22:39.254 ] 00:22:39.254 }' 00:22:39.254 07:23:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:39.254 07:23:12 -- common/autotest_common.sh@10 -- # set +x 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:22:40.187 "name": "raid_bdev1", 00:22:40.187 "uuid": "ce035012-665d-4e41-beef-be712989ee70", 00:22:40.187 "strip_size_kb": 0, 00:22:40.187 "state": "online", 00:22:40.187 "raid_level": "raid1", 00:22:40.187 "superblock": true, 00:22:40.187 "num_base_bdevs": 4, 00:22:40.187 "num_base_bdevs_discovered": 3, 00:22:40.187 "num_base_bdevs_operational": 3, 00:22:40.187 "base_bdevs_list": [ 00:22:40.187 { 00:22:40.187 "name": "spare", 00:22:40.187 "uuid": "39eccc50-a308-5111-ad87-03f18b05a838", 00:22:40.187 "is_configured": true, 00:22:40.187 "data_offset": 2048, 00:22:40.187 "data_size": 63488 00:22:40.187 }, 00:22:40.187 { 00:22:40.187 "name": null, 00:22:40.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:40.187 "is_configured": false, 00:22:40.187 "data_offset": 2048, 00:22:40.187 "data_size": 63488 00:22:40.187 }, 00:22:40.187 { 00:22:40.187 "name": "BaseBdev3", 00:22:40.187 "uuid": "b73113ee-142e-5cd1-9f71-330a52cfc8a6", 00:22:40.187 "is_configured": true, 00:22:40.187 "data_offset": 2048, 00:22:40.187 "data_size": 63488 00:22:40.187 }, 00:22:40.187 { 00:22:40.187 "name": "BaseBdev4", 00:22:40.187 "uuid": "c422b460-5c8b-512d-8586-8b7b7cd576f6", 00:22:40.187 "is_configured": true, 00:22:40.187 "data_offset": 2048, 00:22:40.187 "data_size": 63488 00:22:40.187 } 00:22:40.187 ] 00:22:40.187 }' 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:22:40.187 07:23:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:22:40.446 07:23:13 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:22:40.446 07:23:13 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:40.446 07:23:13 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:22:40.446 07:23:14 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:22:40.446 07:23:14 -- bdev/bdev_raid.sh@709 -- # killprocess 131559 00:22:40.446 07:23:14 -- common/autotest_common.sh@924 -- # '[' -z 131559 ']' 00:22:40.446 07:23:14 -- common/autotest_common.sh@928 -- # kill -0 131559 00:22:40.446 07:23:14 -- common/autotest_common.sh@929 -- # uname 00:22:40.704 07:23:14 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:40.704 07:23:14 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 131559 00:22:40.704 killing process with pid 131559 00:22:40.704 Received shutdown signal, test time was about 16.571578 seconds 00:22:40.704 00:22:40.704 Latency(us) 00:22:40.704 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:22:40.704 =================================================================================================================== 00:22:40.704 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:40.704 07:23:14 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:22:40.704 07:23:14 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:22:40.704 07:23:14 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 131559' 00:22:40.704 07:23:14 -- common/autotest_common.sh@943 -- # kill 131559 00:22:40.704 07:23:14 -- common/autotest_common.sh@948 -- # wait 131559 00:22:40.704 [2024-02-13 07:23:14.157516] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:40.704 [2024-02-13 07:23:14.157577] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:40.704 [2024-02-13 07:23:14.157688] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:22:40.704 [2024-02-13 07:23:14.157701] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c680 name raid_bdev1, state offline 00:22:40.962 [2024-02-13 07:23:14.439778] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:41.897 ************************************ 00:22:41.897 END TEST raid_rebuild_test_sb_io 00:22:41.897 ************************************ 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@711 -- # return 0 00:22:41.897 00:22:41.897 real 0m22.712s 00:22:41.897 user 0m36.586s 00:22:41.897 sys 0m2.597s 00:22:41.897 07:23:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:41.897 07:23:15 -- common/autotest_common.sh@10 -- # set +x 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@742 -- # '[' y == y ']' 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:22:41.897 07:23:15 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:22:41.897 07:23:15 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:41.897 07:23:15 -- common/autotest_common.sh@10 -- # set +x 00:22:41.897 ************************************ 00:22:41.897 START TEST raid5f_state_function_test 00:22:41.897 ************************************ 00:22:41.897 07:23:15 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid5f 3 false 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:41.897 
07:23:15 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:41.897 Process raid pid: 132210 00:22:41.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@226 -- # raid_pid=132210 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132210' 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132210 /var/tmp/spdk-raid.sock 00:22:41.897 07:23:15 -- common/autotest_common.sh@817 -- # '[' -z 132210 ']' 00:22:41.897 07:23:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:41.897 07:23:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:41.897 07:23:15 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:41.897 07:23:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:41.897 07:23:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:41.897 07:23:15 -- common/autotest_common.sh@10 -- # set +x 00:22:41.897 [2024-02-13 07:23:15.559397] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
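
The @225-@228 trace lines above show how the state-function test brings up its target: a bare bdev_svc app is launched with bdev_raid debug logging, and the script blocks until the UNIX-domain RPC socket answers. A minimal bash sketch of that launch, assuming the repo layout seen in the trace (the polling loop is a simplified stand-in for autotest_common.sh's waitforlisten):

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
        -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    raid_pid=$!
    # Poll until the socket accepts RPCs; rpc_get_methods is a cheap no-op query
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
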
00:22:41.897 [2024-02-13 07:23:15.559808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:42.155 [2024-02-13 07:23:15.731133] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.413 [2024-02-13 07:23:15.950846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.671 [2024-02-13 07:23:16.126143] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:42.929 07:23:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:42.929 07:23:16 -- common/autotest_common.sh@850 -- # return 0 00:22:42.929 07:23:16 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:43.188 [2024-02-13 07:23:16.643812] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:43.188 [2024-02-13 07:23:16.644047] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:43.188 [2024-02-13 07:23:16.644148] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:43.188 [2024-02-13 07:23:16.644205] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:43.188 [2024-02-13 07:23:16.644305] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:43.188 [2024-02-13 07:23:16.644384] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:43.188 07:23:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:43.446 07:23:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:43.446 "name": "Existed_Raid", 00:22:43.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.446 "strip_size_kb": 64, 00:22:43.446 "state": "configuring", 00:22:43.446 "raid_level": "raid5f", 00:22:43.446 "superblock": false, 00:22:43.446 "num_base_bdevs": 3, 00:22:43.446 "num_base_bdevs_discovered": 0, 00:22:43.446 "num_base_bdevs_operational": 3, 00:22:43.446 "base_bdevs_list": [ 00:22:43.446 { 00:22:43.446 "name": "BaseBdev1", 00:22:43.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.446 "is_configured": false, 00:22:43.446 "data_offset": 0, 00:22:43.446 "data_size": 0 00:22:43.446 }, 00:22:43.446 { 00:22:43.446 "name": "BaseBdev2", 00:22:43.446 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:43.446 "is_configured": false, 00:22:43.446 "data_offset": 0, 00:22:43.446 "data_size": 0 00:22:43.446 }, 00:22:43.446 { 00:22:43.446 "name": "BaseBdev3", 00:22:43.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:43.446 "is_configured": false, 00:22:43.446 "data_offset": 0, 00:22:43.446 "data_size": 0 00:22:43.446 } 00:22:43.446 ] 00:22:43.446 }' 00:22:43.446 07:23:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:43.446 07:23:16 -- common/autotest_common.sh@10 -- # set +x 00:22:44.012 07:23:17 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:44.270 [2024-02-13 07:23:17.711836] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:44.270 [2024-02-13 07:23:17.711977] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:22:44.270 07:23:17 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:44.535 [2024-02-13 07:23:17.975934] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:44.535 [2024-02-13 07:23:17.976128] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:44.535 [2024-02-13 07:23:17.976225] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:44.535 [2024-02-13 07:23:17.976288] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:44.535 [2024-02-13 07:23:17.976373] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:44.535 [2024-02-13 07:23:17.976433] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:44.535 07:23:17 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:44.535 [2024-02-13 07:23:18.196388] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:44.535 BaseBdev1 00:22:44.535 07:23:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:44.535 07:23:18 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:22:44.535 07:23:18 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:44.535 07:23:18 -- common/autotest_common.sh@887 -- # local i 00:22:44.535 07:23:18 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:44.535 07:23:18 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:44.535 07:23:18 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:44.800 07:23:18 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:45.057 [ 00:22:45.057 { 00:22:45.057 "name": "BaseBdev1", 00:22:45.057 "aliases": [ 00:22:45.057 "6e229b08-c3fa-4655-a109-b0ba01bb1fdf" 00:22:45.057 ], 00:22:45.057 "product_name": "Malloc disk", 00:22:45.057 "block_size": 512, 00:22:45.057 "num_blocks": 65536, 00:22:45.057 "uuid": "6e229b08-c3fa-4655-a109-b0ba01bb1fdf", 00:22:45.057 "assigned_rate_limits": { 00:22:45.057 "rw_ios_per_sec": 0, 00:22:45.057 "rw_mbytes_per_sec": 0, 00:22:45.057 "r_mbytes_per_sec": 0, 00:22:45.057 "w_mbytes_per_sec": 
0 00:22:45.057 }, 00:22:45.057 "claimed": true, 00:22:45.057 "claim_type": "exclusive_write", 00:22:45.057 "zoned": false, 00:22:45.057 "supported_io_types": { 00:22:45.057 "read": true, 00:22:45.057 "write": true, 00:22:45.057 "unmap": true, 00:22:45.057 "write_zeroes": true, 00:22:45.057 "flush": true, 00:22:45.057 "reset": true, 00:22:45.057 "compare": false, 00:22:45.057 "compare_and_write": false, 00:22:45.057 "abort": true, 00:22:45.057 "nvme_admin": false, 00:22:45.057 "nvme_io": false 00:22:45.057 }, 00:22:45.057 "memory_domains": [ 00:22:45.057 { 00:22:45.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:45.057 "dma_device_type": 2 00:22:45.057 } 00:22:45.057 ], 00:22:45.057 "driver_specific": {} 00:22:45.057 } 00:22:45.057 ] 00:22:45.057 07:23:18 -- common/autotest_common.sh@893 -- # return 0 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:45.057 "name": "Existed_Raid", 00:22:45.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.057 "strip_size_kb": 64, 00:22:45.057 "state": "configuring", 00:22:45.057 "raid_level": "raid5f", 00:22:45.057 "superblock": false, 00:22:45.057 "num_base_bdevs": 3, 00:22:45.057 "num_base_bdevs_discovered": 1, 00:22:45.057 "num_base_bdevs_operational": 3, 00:22:45.057 "base_bdevs_list": [ 00:22:45.057 { 00:22:45.057 "name": "BaseBdev1", 00:22:45.057 "uuid": "6e229b08-c3fa-4655-a109-b0ba01bb1fdf", 00:22:45.057 "is_configured": true, 00:22:45.057 "data_offset": 0, 00:22:45.057 "data_size": 65536 00:22:45.057 }, 00:22:45.057 { 00:22:45.057 "name": "BaseBdev2", 00:22:45.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.057 "is_configured": false, 00:22:45.057 "data_offset": 0, 00:22:45.057 "data_size": 0 00:22:45.057 }, 00:22:45.057 { 00:22:45.057 "name": "BaseBdev3", 00:22:45.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:45.057 "is_configured": false, 00:22:45.057 "data_offset": 0, 00:22:45.057 "data_size": 0 00:22:45.057 } 00:22:45.057 ] 00:22:45.057 }' 00:22:45.057 07:23:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:45.057 07:23:18 -- common/autotest_common.sh@10 -- # set +x 00:22:46.030 07:23:19 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:46.030 [2024-02-13 07:23:19.684657] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:46.030 [2024-02-13 07:23:19.684846] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x616000006980 name Existed_Raid, state configuring 00:22:46.030 07:23:19 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:22:46.030 07:23:19 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:46.288 [2024-02-13 07:23:19.860756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:46.288 [2024-02-13 07:23:19.862517] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:46.288 [2024-02-13 07:23:19.862695] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:46.288 [2024-02-13 07:23:19.862818] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:46.288 [2024-02-13 07:23:19.862878] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:46.288 07:23:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:46.546 07:23:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:46.546 "name": "Existed_Raid", 00:22:46.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.546 "strip_size_kb": 64, 00:22:46.546 "state": "configuring", 00:22:46.546 "raid_level": "raid5f", 00:22:46.546 "superblock": false, 00:22:46.546 "num_base_bdevs": 3, 00:22:46.546 "num_base_bdevs_discovered": 1, 00:22:46.546 "num_base_bdevs_operational": 3, 00:22:46.546 "base_bdevs_list": [ 00:22:46.546 { 00:22:46.546 "name": "BaseBdev1", 00:22:46.546 "uuid": "6e229b08-c3fa-4655-a109-b0ba01bb1fdf", 00:22:46.546 "is_configured": true, 00:22:46.546 "data_offset": 0, 00:22:46.546 "data_size": 65536 00:22:46.546 }, 00:22:46.546 { 00:22:46.546 "name": "BaseBdev2", 00:22:46.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.546 "is_configured": false, 00:22:46.546 "data_offset": 0, 00:22:46.546 "data_size": 0 00:22:46.546 }, 00:22:46.546 { 00:22:46.546 "name": "BaseBdev3", 00:22:46.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:46.546 "is_configured": false, 00:22:46.546 "data_offset": 0, 00:22:46.546 "data_size": 0 00:22:46.546 } 00:22:46.546 ] 00:22:46.546 }' 00:22:46.546 07:23:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:46.546 07:23:20 -- common/autotest_common.sh@10 -- # set +x 00:22:47.480 07:23:20 -- bdev/bdev_raid.sh@256 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:22:47.480 [2024-02-13 07:23:21.090512] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:22:47.480 BaseBdev2 00:22:47.480 07:23:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:22:47.480 07:23:21 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:22:47.480 07:23:21 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:47.480 07:23:21 -- common/autotest_common.sh@887 -- # local i 00:22:47.480 07:23:21 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:47.480 07:23:21 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:47.480 07:23:21 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:47.738 07:23:21 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:22:47.996 [ 00:22:47.996 { 00:22:47.996 "name": "BaseBdev2", 00:22:47.996 "aliases": [ 00:22:47.996 "0d72072d-2396-4ba6-ae99-fd02d816fdc0" 00:22:47.996 ], 00:22:47.996 "product_name": "Malloc disk", 00:22:47.996 "block_size": 512, 00:22:47.996 "num_blocks": 65536, 00:22:47.996 "uuid": "0d72072d-2396-4ba6-ae99-fd02d816fdc0", 00:22:47.996 "assigned_rate_limits": { 00:22:47.996 "rw_ios_per_sec": 0, 00:22:47.996 "rw_mbytes_per_sec": 0, 00:22:47.996 "r_mbytes_per_sec": 0, 00:22:47.996 "w_mbytes_per_sec": 0 00:22:47.996 }, 00:22:47.996 "claimed": true, 00:22:47.996 "claim_type": "exclusive_write", 00:22:47.996 "zoned": false, 00:22:47.996 "supported_io_types": { 00:22:47.996 "read": true, 00:22:47.996 "write": true, 00:22:47.996 "unmap": true, 00:22:47.996 "write_zeroes": true, 00:22:47.996 "flush": true, 00:22:47.996 "reset": true, 00:22:47.996 "compare": false, 00:22:47.996 "compare_and_write": false, 00:22:47.996 "abort": true, 00:22:47.996 "nvme_admin": false, 00:22:47.996 "nvme_io": false 00:22:47.996 }, 00:22:47.996 "memory_domains": [ 00:22:47.996 { 00:22:47.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:47.996 "dma_device_type": 2 00:22:47.996 } 00:22:47.996 ], 00:22:47.996 "driver_specific": {} 00:22:47.996 } 00:22:47.996 ] 00:22:47.996 07:23:21 -- common/autotest_common.sh@893 -- # return 0 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:47.996 07:23:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
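
verify_raid_bdev_state, whose @117-@127 echoes recur throughout this run, works by pulling the full raid bdev list over the RPC socket and filtering the JSON with jq; the filtered object is what appears as raid_bdev_info below. The same query pattern as a standalone sketch (script path, socket path, and the jq filter are verbatim from the trace; the .state pull at the end is an illustrative extra):

    raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # Individual fields can then be checked against the expected values,
    # e.g. the array state ("configuring" until all base bdevs are claimed):
    jq -r '.state' <<< "$raid_bdev_info"
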
00:22:48.254 07:23:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:48.254 "name": "Existed_Raid", 00:22:48.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.254 "strip_size_kb": 64, 00:22:48.254 "state": "configuring", 00:22:48.254 "raid_level": "raid5f", 00:22:48.254 "superblock": false, 00:22:48.254 "num_base_bdevs": 3, 00:22:48.254 "num_base_bdevs_discovered": 2, 00:22:48.254 "num_base_bdevs_operational": 3, 00:22:48.254 "base_bdevs_list": [ 00:22:48.254 { 00:22:48.254 "name": "BaseBdev1", 00:22:48.254 "uuid": "6e229b08-c3fa-4655-a109-b0ba01bb1fdf", 00:22:48.254 "is_configured": true, 00:22:48.254 "data_offset": 0, 00:22:48.254 "data_size": 65536 00:22:48.254 }, 00:22:48.254 { 00:22:48.254 "name": "BaseBdev2", 00:22:48.254 "uuid": "0d72072d-2396-4ba6-ae99-fd02d816fdc0", 00:22:48.254 "is_configured": true, 00:22:48.254 "data_offset": 0, 00:22:48.254 "data_size": 65536 00:22:48.254 }, 00:22:48.254 { 00:22:48.254 "name": "BaseBdev3", 00:22:48.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:48.254 "is_configured": false, 00:22:48.254 "data_offset": 0, 00:22:48.254 "data_size": 0 00:22:48.254 } 00:22:48.254 ] 00:22:48.254 }' 00:22:48.254 07:23:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:48.254 07:23:21 -- common/autotest_common.sh@10 -- # set +x 00:22:49.188 07:23:22 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:22:49.188 [2024-02-13 07:23:22.791326] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:22:49.188 [2024-02-13 07:23:22.791600] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:22:49.188 [2024-02-13 07:23:22.791645] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:22:49.188 [2024-02-13 07:23:22.791861] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000055f0 00:22:49.188 [2024-02-13 07:23:22.796399] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007280 00:22:49.188 [2024-02-13 07:23:22.796556] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:22:49.188 [2024-02-13 07:23:22.796968] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:22:49.188 BaseBdev3 00:22:49.188 07:23:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:22:49.188 07:23:22 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:22:49.188 07:23:22 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:49.188 07:23:22 -- common/autotest_common.sh@887 -- # local i 00:22:49.188 07:23:22 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:49.188 07:23:22 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:49.188 07:23:22 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:49.446 07:23:22 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:22:49.704 [ 00:22:49.704 { 00:22:49.704 "name": "BaseBdev3", 00:22:49.704 "aliases": [ 00:22:49.704 "1d8c7094-955c-4598-8c48-0deafe36c070" 00:22:49.704 ], 00:22:49.704 "product_name": "Malloc disk", 00:22:49.704 "block_size": 512, 00:22:49.704 "num_blocks": 65536, 00:22:49.704 "uuid": "1d8c7094-955c-4598-8c48-0deafe36c070", 00:22:49.704 "assigned_rate_limits": { 00:22:49.704 
"rw_ios_per_sec": 0, 00:22:49.704 "rw_mbytes_per_sec": 0, 00:22:49.704 "r_mbytes_per_sec": 0, 00:22:49.704 "w_mbytes_per_sec": 0 00:22:49.704 }, 00:22:49.704 "claimed": true, 00:22:49.704 "claim_type": "exclusive_write", 00:22:49.704 "zoned": false, 00:22:49.704 "supported_io_types": { 00:22:49.704 "read": true, 00:22:49.704 "write": true, 00:22:49.704 "unmap": true, 00:22:49.704 "write_zeroes": true, 00:22:49.704 "flush": true, 00:22:49.704 "reset": true, 00:22:49.704 "compare": false, 00:22:49.704 "compare_and_write": false, 00:22:49.704 "abort": true, 00:22:49.704 "nvme_admin": false, 00:22:49.704 "nvme_io": false 00:22:49.704 }, 00:22:49.704 "memory_domains": [ 00:22:49.704 { 00:22:49.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:49.705 "dma_device_type": 2 00:22:49.705 } 00:22:49.705 ], 00:22:49.705 "driver_specific": {} 00:22:49.705 } 00:22:49.705 ] 00:22:49.705 07:23:23 -- common/autotest_common.sh@893 -- # return 0 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:49.705 07:23:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:49.964 07:23:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:49.964 "name": "Existed_Raid", 00:22:49.964 "uuid": "2591d45b-e6b5-4fc8-bb0b-1a0c02a2ffe8", 00:22:49.964 "strip_size_kb": 64, 00:22:49.964 "state": "online", 00:22:49.964 "raid_level": "raid5f", 00:22:49.964 "superblock": false, 00:22:49.964 "num_base_bdevs": 3, 00:22:49.964 "num_base_bdevs_discovered": 3, 00:22:49.964 "num_base_bdevs_operational": 3, 00:22:49.964 "base_bdevs_list": [ 00:22:49.964 { 00:22:49.964 "name": "BaseBdev1", 00:22:49.964 "uuid": "6e229b08-c3fa-4655-a109-b0ba01bb1fdf", 00:22:49.964 "is_configured": true, 00:22:49.964 "data_offset": 0, 00:22:49.964 "data_size": 65536 00:22:49.964 }, 00:22:49.964 { 00:22:49.964 "name": "BaseBdev2", 00:22:49.964 "uuid": "0d72072d-2396-4ba6-ae99-fd02d816fdc0", 00:22:49.964 "is_configured": true, 00:22:49.964 "data_offset": 0, 00:22:49.964 "data_size": 65536 00:22:49.964 }, 00:22:49.964 { 00:22:49.964 "name": "BaseBdev3", 00:22:49.964 "uuid": "1d8c7094-955c-4598-8c48-0deafe36c070", 00:22:49.964 "is_configured": true, 00:22:49.964 "data_offset": 0, 00:22:49.964 "data_size": 65536 00:22:49.964 } 00:22:49.964 ] 00:22:49.964 }' 00:22:49.964 07:23:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:49.964 07:23:23 -- common/autotest_common.sh@10 -- # set +x 00:22:50.530 07:23:24 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:22:50.788 [2024-02-13 07:23:24.290566] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@196 -- # return 0 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:50.788 07:23:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:51.047 07:23:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:51.047 "name": "Existed_Raid", 00:22:51.047 "uuid": "2591d45b-e6b5-4fc8-bb0b-1a0c02a2ffe8", 00:22:51.047 "strip_size_kb": 64, 00:22:51.047 "state": "online", 00:22:51.047 "raid_level": "raid5f", 00:22:51.047 "superblock": false, 00:22:51.047 "num_base_bdevs": 3, 00:22:51.047 "num_base_bdevs_discovered": 2, 00:22:51.047 "num_base_bdevs_operational": 2, 00:22:51.047 "base_bdevs_list": [ 00:22:51.047 { 00:22:51.047 "name": null, 00:22:51.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:51.047 "is_configured": false, 00:22:51.047 "data_offset": 0, 00:22:51.047 "data_size": 65536 00:22:51.047 }, 00:22:51.047 { 00:22:51.047 "name": "BaseBdev2", 00:22:51.047 "uuid": "0d72072d-2396-4ba6-ae99-fd02d816fdc0", 00:22:51.047 "is_configured": true, 00:22:51.047 "data_offset": 0, 00:22:51.047 "data_size": 65536 00:22:51.047 }, 00:22:51.047 { 00:22:51.047 "name": "BaseBdev3", 00:22:51.047 "uuid": "1d8c7094-955c-4598-8c48-0deafe36c070", 00:22:51.047 "is_configured": true, 00:22:51.047 "data_offset": 0, 00:22:51.047 "data_size": 65536 00:22:51.047 } 00:22:51.047 ] 00:22:51.047 }' 00:22:51.047 07:23:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:51.047 07:23:24 -- common/autotest_common.sh@10 -- # set +x 00:22:51.614 07:23:25 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:22:51.614 07:23:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:51.614 07:23:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:51.614 07:23:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:51.872 07:23:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:51.872 07:23:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:51.872 07:23:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:22:52.131 [2024-02-13 07:23:25.728986] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:22:52.131 [2024-02-13 07:23:25.729181] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:22:52.131 [2024-02-13 07:23:25.729359] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:22:52.131 07:23:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:52.131 07:23:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:52.131 07:23:25 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.131 07:23:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:22:52.389 07:23:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:22:52.389 07:23:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:22:52.389 07:23:25 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:22:52.648 [2024-02-13 07:23:26.157864] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:22:52.648 [2024-02-13 07:23:26.158059] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:22:52.648 07:23:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:22:52.648 07:23:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:22:52.648 07:23:26 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:52.648 07:23:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:22:52.906 07:23:26 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:22:52.906 07:23:26 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:22:52.906 07:23:26 -- bdev/bdev_raid.sh@287 -- # killprocess 132210 00:22:52.906 07:23:26 -- common/autotest_common.sh@924 -- # '[' -z 132210 ']' 00:22:52.906 07:23:26 -- common/autotest_common.sh@928 -- # kill -0 132210 00:22:52.906 07:23:26 -- common/autotest_common.sh@929 -- # uname 00:22:52.906 07:23:26 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:22:52.906 07:23:26 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 132210 00:22:52.906 killing process with pid 132210 00:22:52.906 07:23:26 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:22:52.906 07:23:26 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:22:52.906 07:23:26 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 132210' 00:22:52.906 07:23:26 -- common/autotest_common.sh@943 -- # kill 132210 00:22:52.906 07:23:26 -- common/autotest_common.sh@948 -- # wait 132210 00:22:52.906 [2024-02-13 07:23:26.471120] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:22:52.906 [2024-02-13 07:23:26.471368] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:22:54.281 ************************************ 00:22:54.281 END TEST raid5f_state_function_test 00:22:54.281 ************************************ 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:22:54.281 00:22:54.281 real 0m12.092s 00:22:54.281 user 0m21.399s 00:22:54.281 sys 0m1.403s 00:22:54.281 07:23:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:54.281 07:23:27 -- common/autotest_common.sh@10 -- # set +x 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:22:54.281 07:23:27 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:22:54.281 
07:23:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:22:54.281 07:23:27 -- common/autotest_common.sh@10 -- # set +x 00:22:54.281 ************************************ 00:22:54.281 START TEST raid5f_state_function_test_sb 00:22:54.281 ************************************ 00:22:54.281 07:23:27 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid5f 3 true 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=132606 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:22:54.281 Process raid pid: 132606 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 132606' 00:22:54.281 07:23:27 -- bdev/bdev_raid.sh@228 -- # waitforlisten 132606 /var/tmp/spdk-raid.sock 00:22:54.281 07:23:27 -- common/autotest_common.sh@817 -- # '[' -z 132606 ']' 00:22:54.281 07:23:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:22:54.281 07:23:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:54.281 07:23:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:22:54.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:22:54.281 07:23:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:54.281 07:23:27 -- common/autotest_common.sh@10 -- # set +x 00:22:54.281 [2024-02-13 07:23:27.714232] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
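
Relative to the non-superblock variant above, the only functional difference in this test is superblock=true, which the @219-@220 lines turn into the -s flag on create. A sketch of the resulting RPC call, with every argument taken verbatim from the @232 trace that follows (the flag comments are this editor's reading of the trace, not test output):

    # -z 64: 64 KiB strip size; -s: write an on-disk superblock to each base bdev
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -z 64 -s -r raid5f \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
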
00:22:54.281 [2024-02-13 07:23:27.714622] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.281 [2024-02-13 07:23:27.886942] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.540 [2024-02-13 07:23:28.112602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.799 [2024-02-13 07:23:28.288299] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:22:55.058 07:23:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:55.058 07:23:28 -- common/autotest_common.sh@850 -- # return 0 00:22:55.058 07:23:28 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:55.316 [2024-02-13 07:23:28.834741] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:55.316 [2024-02-13 07:23:28.835012] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:55.316 [2024-02-13 07:23:28.835122] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:55.316 [2024-02-13 07:23:28.835246] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:55.316 [2024-02-13 07:23:28.835336] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:55.316 [2024-02-13 07:23:28.835415] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:55.316 07:23:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:55.575 07:23:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:55.575 "name": "Existed_Raid", 00:22:55.575 "uuid": "672507a8-fd95-45a0-95b4-40ad2f45f874", 00:22:55.575 "strip_size_kb": 64, 00:22:55.575 "state": "configuring", 00:22:55.575 "raid_level": "raid5f", 00:22:55.575 "superblock": true, 00:22:55.575 "num_base_bdevs": 3, 00:22:55.575 "num_base_bdevs_discovered": 0, 00:22:55.575 "num_base_bdevs_operational": 3, 00:22:55.575 "base_bdevs_list": [ 00:22:55.575 { 00:22:55.575 "name": "BaseBdev1", 00:22:55.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.575 "is_configured": false, 00:22:55.575 "data_offset": 0, 00:22:55.575 "data_size": 0 00:22:55.575 }, 00:22:55.575 { 00:22:55.575 "name": "BaseBdev2", 00:22:55.575 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:22:55.575 "is_configured": false, 00:22:55.575 "data_offset": 0, 00:22:55.575 "data_size": 0 00:22:55.575 }, 00:22:55.575 { 00:22:55.575 "name": "BaseBdev3", 00:22:55.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:55.575 "is_configured": false, 00:22:55.576 "data_offset": 0, 00:22:55.576 "data_size": 0 00:22:55.576 } 00:22:55.576 ] 00:22:55.576 }' 00:22:55.576 07:23:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:55.576 07:23:29 -- common/autotest_common.sh@10 -- # set +x 00:22:56.143 07:23:29 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:56.401 [2024-02-13 07:23:29.878707] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:56.401 [2024-02-13 07:23:29.878886] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:22:56.401 07:23:29 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:56.401 [2024-02-13 07:23:30.074811] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:22:56.401 [2024-02-13 07:23:30.075017] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:22:56.401 [2024-02-13 07:23:30.075123] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:56.401 [2024-02-13 07:23:30.075186] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:56.401 [2024-02-13 07:23:30.075390] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:56.401 [2024-02-13 07:23:30.075454] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:56.401 07:23:30 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:56.660 [2024-02-13 07:23:30.309819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:56.660 BaseBdev1 00:22:56.660 07:23:30 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:22:56.660 07:23:30 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:22:56.660 07:23:30 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:56.660 07:23:30 -- common/autotest_common.sh@887 -- # local i 00:22:56.660 07:23:30 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:56.660 07:23:30 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:56.660 07:23:30 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:56.919 07:23:30 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:57.179 [ 00:22:57.179 { 00:22:57.179 "name": "BaseBdev1", 00:22:57.179 "aliases": [ 00:22:57.179 "71593c42-2649-44a5-ba80-ccd8a47b7f0d" 00:22:57.179 ], 00:22:57.179 "product_name": "Malloc disk", 00:22:57.179 "block_size": 512, 00:22:57.179 "num_blocks": 65536, 00:22:57.179 "uuid": "71593c42-2649-44a5-ba80-ccd8a47b7f0d", 00:22:57.179 "assigned_rate_limits": { 00:22:57.179 "rw_ios_per_sec": 0, 00:22:57.179 "rw_mbytes_per_sec": 0, 00:22:57.179 "r_mbytes_per_sec": 0, 00:22:57.179 
"w_mbytes_per_sec": 0 00:22:57.179 }, 00:22:57.179 "claimed": true, 00:22:57.179 "claim_type": "exclusive_write", 00:22:57.179 "zoned": false, 00:22:57.179 "supported_io_types": { 00:22:57.179 "read": true, 00:22:57.179 "write": true, 00:22:57.179 "unmap": true, 00:22:57.179 "write_zeroes": true, 00:22:57.179 "flush": true, 00:22:57.179 "reset": true, 00:22:57.179 "compare": false, 00:22:57.179 "compare_and_write": false, 00:22:57.179 "abort": true, 00:22:57.179 "nvme_admin": false, 00:22:57.179 "nvme_io": false 00:22:57.179 }, 00:22:57.179 "memory_domains": [ 00:22:57.179 { 00:22:57.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:57.179 "dma_device_type": 2 00:22:57.179 } 00:22:57.179 ], 00:22:57.179 "driver_specific": {} 00:22:57.179 } 00:22:57.179 ] 00:22:57.179 07:23:30 -- common/autotest_common.sh@893 -- # return 0 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:57.179 07:23:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:57.438 07:23:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:57.438 "name": "Existed_Raid", 00:22:57.438 "uuid": "9c0eb06e-4ae1-4094-8169-498f7ff50810", 00:22:57.438 "strip_size_kb": 64, 00:22:57.438 "state": "configuring", 00:22:57.438 "raid_level": "raid5f", 00:22:57.438 "superblock": true, 00:22:57.438 "num_base_bdevs": 3, 00:22:57.438 "num_base_bdevs_discovered": 1, 00:22:57.438 "num_base_bdevs_operational": 3, 00:22:57.438 "base_bdevs_list": [ 00:22:57.438 { 00:22:57.438 "name": "BaseBdev1", 00:22:57.438 "uuid": "71593c42-2649-44a5-ba80-ccd8a47b7f0d", 00:22:57.438 "is_configured": true, 00:22:57.438 "data_offset": 2048, 00:22:57.438 "data_size": 63488 00:22:57.438 }, 00:22:57.438 { 00:22:57.438 "name": "BaseBdev2", 00:22:57.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.438 "is_configured": false, 00:22:57.438 "data_offset": 0, 00:22:57.438 "data_size": 0 00:22:57.438 }, 00:22:57.438 { 00:22:57.438 "name": "BaseBdev3", 00:22:57.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:57.438 "is_configured": false, 00:22:57.438 "data_offset": 0, 00:22:57.438 "data_size": 0 00:22:57.438 } 00:22:57.438 ] 00:22:57.438 }' 00:22:57.438 07:23:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:57.438 07:23:31 -- common/autotest_common.sh@10 -- # set +x 00:22:58.015 07:23:31 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:22:58.272 [2024-02-13 07:23:31.898128] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:22:58.272 [2024-02-13 07:23:31.898339] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:22:58.272 07:23:31 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:22:58.272 07:23:31 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:22:58.530 07:23:32 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:22:58.788 BaseBdev1 00:22:58.788 07:23:32 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:22:58.788 07:23:32 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:22:58.788 07:23:32 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:58.788 07:23:32 -- common/autotest_common.sh@887 -- # local i 00:22:58.788 07:23:32 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:58.788 07:23:32 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:58.788 07:23:32 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:22:59.045 07:23:32 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:22:59.303 [ 00:22:59.303 { 00:22:59.303 "name": "BaseBdev1", 00:22:59.303 "aliases": [ 00:22:59.303 "7aec88f6-4f83-4b5e-b86a-f32286f6e489" 00:22:59.303 ], 00:22:59.303 "product_name": "Malloc disk", 00:22:59.303 "block_size": 512, 00:22:59.303 "num_blocks": 65536, 00:22:59.303 "uuid": "7aec88f6-4f83-4b5e-b86a-f32286f6e489", 00:22:59.303 "assigned_rate_limits": { 00:22:59.303 "rw_ios_per_sec": 0, 00:22:59.303 "rw_mbytes_per_sec": 0, 00:22:59.303 "r_mbytes_per_sec": 0, 00:22:59.303 "w_mbytes_per_sec": 0 00:22:59.303 }, 00:22:59.303 "claimed": false, 00:22:59.303 "zoned": false, 00:22:59.303 "supported_io_types": { 00:22:59.303 "read": true, 00:22:59.303 "write": true, 00:22:59.303 "unmap": true, 00:22:59.303 "write_zeroes": true, 00:22:59.303 "flush": true, 00:22:59.303 "reset": true, 00:22:59.303 "compare": false, 00:22:59.303 "compare_and_write": false, 00:22:59.303 "abort": true, 00:22:59.303 "nvme_admin": false, 00:22:59.303 "nvme_io": false 00:22:59.303 }, 00:22:59.303 "memory_domains": [ 00:22:59.303 { 00:22:59.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:22:59.303 "dma_device_type": 2 00:22:59.303 } 00:22:59.303 ], 00:22:59.303 "driver_specific": {} 00:22:59.303 } 00:22:59.303 ] 00:22:59.303 07:23:32 -- common/autotest_common.sh@893 -- # return 0 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:22:59.303 [2024-02-13 07:23:32.934409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:22:59.303 [2024-02-13 07:23:32.936478] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:22:59.303 [2024-02-13 07:23:32.936673] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:22:59.303 [2024-02-13 07:23:32.936782] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:22:59.303 [2024-02-13 07:23:32.936859] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:22:59.303 
07:23:32 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:22:59.303 07:23:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:22:59.561 07:23:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:22:59.561 "name": "Existed_Raid", 00:22:59.561 "uuid": "9f5b81d0-4eab-4954-8e30-5e5afd1b2f6e", 00:22:59.561 "strip_size_kb": 64, 00:22:59.561 "state": "configuring", 00:22:59.561 "raid_level": "raid5f", 00:22:59.561 "superblock": true, 00:22:59.561 "num_base_bdevs": 3, 00:22:59.561 "num_base_bdevs_discovered": 1, 00:22:59.561 "num_base_bdevs_operational": 3, 00:22:59.561 "base_bdevs_list": [ 00:22:59.561 { 00:22:59.561 "name": "BaseBdev1", 00:22:59.561 "uuid": "7aec88f6-4f83-4b5e-b86a-f32286f6e489", 00:22:59.561 "is_configured": true, 00:22:59.561 "data_offset": 2048, 00:22:59.561 "data_size": 63488 00:22:59.561 }, 00:22:59.561 { 00:22:59.561 "name": "BaseBdev2", 00:22:59.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.561 "is_configured": false, 00:22:59.561 "data_offset": 0, 00:22:59.561 "data_size": 0 00:22:59.561 }, 00:22:59.561 { 00:22:59.561 "name": "BaseBdev3", 00:22:59.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:22:59.561 "is_configured": false, 00:22:59.561 "data_offset": 0, 00:22:59.561 "data_size": 0 00:22:59.561 } 00:22:59.561 ] 00:22:59.561 }' 00:22:59.561 07:23:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:22:59.561 07:23:33 -- common/autotest_common.sh@10 -- # set +x 00:23:00.496 07:23:33 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:00.496 [2024-02-13 07:23:34.121096] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:00.496 BaseBdev2 00:23:00.496 07:23:34 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:23:00.496 07:23:34 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:23:00.496 07:23:34 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:00.496 07:23:34 -- common/autotest_common.sh@887 -- # local i 00:23:00.496 07:23:34 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:00.496 07:23:34 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:00.496 07:23:34 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:00.754 07:23:34 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:23:01.012 [ 00:23:01.012 { 00:23:01.012 "name": "BaseBdev2", 00:23:01.012 "aliases": [ 00:23:01.012 
"81ba2d40-5c3e-442d-bd18-e95b353aaf96" 00:23:01.012 ], 00:23:01.012 "product_name": "Malloc disk", 00:23:01.012 "block_size": 512, 00:23:01.012 "num_blocks": 65536, 00:23:01.012 "uuid": "81ba2d40-5c3e-442d-bd18-e95b353aaf96", 00:23:01.012 "assigned_rate_limits": { 00:23:01.012 "rw_ios_per_sec": 0, 00:23:01.012 "rw_mbytes_per_sec": 0, 00:23:01.012 "r_mbytes_per_sec": 0, 00:23:01.012 "w_mbytes_per_sec": 0 00:23:01.012 }, 00:23:01.012 "claimed": true, 00:23:01.012 "claim_type": "exclusive_write", 00:23:01.012 "zoned": false, 00:23:01.012 "supported_io_types": { 00:23:01.012 "read": true, 00:23:01.012 "write": true, 00:23:01.012 "unmap": true, 00:23:01.012 "write_zeroes": true, 00:23:01.012 "flush": true, 00:23:01.012 "reset": true, 00:23:01.012 "compare": false, 00:23:01.012 "compare_and_write": false, 00:23:01.012 "abort": true, 00:23:01.012 "nvme_admin": false, 00:23:01.012 "nvme_io": false 00:23:01.012 }, 00:23:01.012 "memory_domains": [ 00:23:01.012 { 00:23:01.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:01.012 "dma_device_type": 2 00:23:01.012 } 00:23:01.012 ], 00:23:01.012 "driver_specific": {} 00:23:01.012 } 00:23:01.012 ] 00:23:01.012 07:23:34 -- common/autotest_common.sh@893 -- # return 0 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:01.012 07:23:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:01.013 07:23:34 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:01.013 07:23:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:01.271 07:23:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:01.271 "name": "Existed_Raid", 00:23:01.271 "uuid": "9f5b81d0-4eab-4954-8e30-5e5afd1b2f6e", 00:23:01.271 "strip_size_kb": 64, 00:23:01.271 "state": "configuring", 00:23:01.271 "raid_level": "raid5f", 00:23:01.271 "superblock": true, 00:23:01.271 "num_base_bdevs": 3, 00:23:01.271 "num_base_bdevs_discovered": 2, 00:23:01.271 "num_base_bdevs_operational": 3, 00:23:01.271 "base_bdevs_list": [ 00:23:01.271 { 00:23:01.271 "name": "BaseBdev1", 00:23:01.271 "uuid": "7aec88f6-4f83-4b5e-b86a-f32286f6e489", 00:23:01.271 "is_configured": true, 00:23:01.271 "data_offset": 2048, 00:23:01.271 "data_size": 63488 00:23:01.271 }, 00:23:01.271 { 00:23:01.271 "name": "BaseBdev2", 00:23:01.271 "uuid": "81ba2d40-5c3e-442d-bd18-e95b353aaf96", 00:23:01.271 "is_configured": true, 00:23:01.271 "data_offset": 2048, 00:23:01.271 "data_size": 63488 00:23:01.271 }, 00:23:01.271 { 00:23:01.271 "name": "BaseBdev3", 00:23:01.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:01.271 "is_configured": false, 00:23:01.271 "data_offset": 0, 00:23:01.271 "data_size": 0 
00:23:01.271 } 00:23:01.271 ] 00:23:01.271 }' 00:23:01.271 07:23:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:01.271 07:23:34 -- common/autotest_common.sh@10 -- # set +x 00:23:01.838 07:23:35 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:02.096 [2024-02-13 07:23:35.595921] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:02.096 [2024-02-13 07:23:35.596441] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:23:02.096 BaseBdev3 00:23:02.096 [2024-02-13 07:23:35.596931] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:02.096 [2024-02-13 07:23:35.597174] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:23:02.096 07:23:35 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:23:02.096 07:23:35 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:23:02.096 07:23:35 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:23:02.096 07:23:35 -- common/autotest_common.sh@887 -- # local i 00:23:02.096 07:23:35 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:23:02.096 07:23:35 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:23:02.096 07:23:35 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:23:02.096 [2024-02-13 07:23:35.611797] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:23:02.096 [2024-02-13 07:23:35.612008] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:23:02.096 [2024-02-13 07:23:35.612427] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:02.096 07:23:35 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:23:02.355 [ 00:23:02.355 { 00:23:02.355 "name": "BaseBdev3", 00:23:02.355 "aliases": [ 00:23:02.355 "d91bda71-073d-4567-937c-e3eeaa9745c6" 00:23:02.355 ], 00:23:02.355 "product_name": "Malloc disk", 00:23:02.355 "block_size": 512, 00:23:02.355 "num_blocks": 65536, 00:23:02.355 "uuid": "d91bda71-073d-4567-937c-e3eeaa9745c6", 00:23:02.355 "assigned_rate_limits": { 00:23:02.355 "rw_ios_per_sec": 0, 00:23:02.355 "rw_mbytes_per_sec": 0, 00:23:02.355 "r_mbytes_per_sec": 0, 00:23:02.355 "w_mbytes_per_sec": 0 00:23:02.355 }, 00:23:02.355 "claimed": true, 00:23:02.355 "claim_type": "exclusive_write", 00:23:02.355 "zoned": false, 00:23:02.355 "supported_io_types": { 00:23:02.355 "read": true, 00:23:02.355 "write": true, 00:23:02.355 "unmap": true, 00:23:02.355 "write_zeroes": true, 00:23:02.355 "flush": true, 00:23:02.355 "reset": true, 00:23:02.355 "compare": false, 00:23:02.355 "compare_and_write": false, 00:23:02.355 "abort": true, 00:23:02.355 "nvme_admin": false, 00:23:02.355 "nvme_io": false 00:23:02.355 }, 00:23:02.355 "memory_domains": [ 00:23:02.355 { 00:23:02.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:02.355 "dma_device_type": 2 00:23:02.355 } 00:23:02.355 ], 00:23:02.355 "driver_specific": {} 00:23:02.355 } 00:23:02.355 ] 00:23:02.355 07:23:35 -- common/autotest_common.sh@893 -- # return 0 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:23:02.355 07:23:35 -- 
bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:02.355 07:23:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:02.356 07:23:35 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:02.356 07:23:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:02.615 07:23:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:02.615 "name": "Existed_Raid", 00:23:02.615 "uuid": "9f5b81d0-4eab-4954-8e30-5e5afd1b2f6e", 00:23:02.615 "strip_size_kb": 64, 00:23:02.615 "state": "online", 00:23:02.615 "raid_level": "raid5f", 00:23:02.615 "superblock": true, 00:23:02.615 "num_base_bdevs": 3, 00:23:02.615 "num_base_bdevs_discovered": 3, 00:23:02.615 "num_base_bdevs_operational": 3, 00:23:02.615 "base_bdevs_list": [ 00:23:02.615 { 00:23:02.615 "name": "BaseBdev1", 00:23:02.615 "uuid": "7aec88f6-4f83-4b5e-b86a-f32286f6e489", 00:23:02.615 "is_configured": true, 00:23:02.615 "data_offset": 2048, 00:23:02.615 "data_size": 63488 00:23:02.615 }, 00:23:02.615 { 00:23:02.615 "name": "BaseBdev2", 00:23:02.615 "uuid": "81ba2d40-5c3e-442d-bd18-e95b353aaf96", 00:23:02.615 "is_configured": true, 00:23:02.615 "data_offset": 2048, 00:23:02.615 "data_size": 63488 00:23:02.615 }, 00:23:02.615 { 00:23:02.615 "name": "BaseBdev3", 00:23:02.615 "uuid": "d91bda71-073d-4567-937c-e3eeaa9745c6", 00:23:02.615 "is_configured": true, 00:23:02.615 "data_offset": 2048, 00:23:02.615 "data_size": 63488 00:23:02.615 } 00:23:02.615 ] 00:23:02.615 }' 00:23:02.615 07:23:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:02.615 07:23:36 -- common/autotest_common.sh@10 -- # set +x 00:23:03.550 07:23:36 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:23:03.550 [2024-02-13 07:23:37.120073] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:03.550 07:23:37 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:03.550 07:23:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:23:03.815 07:23:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:03.815 "name": "Existed_Raid", 00:23:03.815 "uuid": "9f5b81d0-4eab-4954-8e30-5e5afd1b2f6e", 00:23:03.815 "strip_size_kb": 64, 00:23:03.815 "state": "online", 00:23:03.815 "raid_level": "raid5f", 00:23:03.815 "superblock": true, 00:23:03.815 "num_base_bdevs": 3, 00:23:03.815 "num_base_bdevs_discovered": 2, 00:23:03.815 "num_base_bdevs_operational": 2, 00:23:03.815 "base_bdevs_list": [ 00:23:03.815 { 00:23:03.815 "name": null, 00:23:03.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:03.815 "is_configured": false, 00:23:03.815 "data_offset": 2048, 00:23:03.815 "data_size": 63488 00:23:03.815 }, 00:23:03.815 { 00:23:03.815 "name": "BaseBdev2", 00:23:03.815 "uuid": "81ba2d40-5c3e-442d-bd18-e95b353aaf96", 00:23:03.815 "is_configured": true, 00:23:03.815 "data_offset": 2048, 00:23:03.815 "data_size": 63488 00:23:03.815 }, 00:23:03.815 { 00:23:03.815 "name": "BaseBdev3", 00:23:03.815 "uuid": "d91bda71-073d-4567-937c-e3eeaa9745c6", 00:23:03.815 "is_configured": true, 00:23:03.815 "data_offset": 2048, 00:23:03.815 "data_size": 63488 00:23:03.815 } 00:23:03.815 ] 00:23:03.815 }' 00:23:03.815 07:23:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:03.815 07:23:37 -- common/autotest_common.sh@10 -- # set +x 00:23:04.383 07:23:38 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:23:04.383 07:23:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:04.383 07:23:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:04.383 07:23:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:04.951 07:23:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:04.951 07:23:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:04.951 07:23:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:23:04.951 [2024-02-13 07:23:38.560704] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:23:04.951 [2024-02-13 07:23:38.560882] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:04.951 [2024-02-13 07:23:38.561044] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:05.209 07:23:38 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:05.209 07:23:38 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:05.209 07:23:38 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.209 07:23:38 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:23:05.469 07:23:38 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:23:05.469 07:23:38 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:23:05.469 07:23:38 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:23:05.469 [2024-02-13 07:23:39.093910] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev3 00:23:05.469 [2024-02-13 07:23:39.094139] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:23:05.727 07:23:39 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:23:05.727 07:23:39 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:23:05.727 07:23:39 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:05.727 07:23:39 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:23:05.727 07:23:39 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:23:05.727 07:23:39 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:23:05.727 07:23:39 -- bdev/bdev_raid.sh@287 -- # killprocess 132606 00:23:05.727 07:23:39 -- common/autotest_common.sh@924 -- # '[' -z 132606 ']' 00:23:05.727 07:23:39 -- common/autotest_common.sh@928 -- # kill -0 132606 00:23:05.727 07:23:39 -- common/autotest_common.sh@929 -- # uname 00:23:05.727 07:23:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:05.727 07:23:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 132606 00:23:05.727 killing process with pid 132606 00:23:05.727 07:23:39 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:05.727 07:23:39 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:05.727 07:23:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 132606' 00:23:05.727 07:23:39 -- common/autotest_common.sh@943 -- # kill 132606 00:23:05.727 07:23:39 -- common/autotest_common.sh@948 -- # wait 132606 00:23:05.727 [2024-02-13 07:23:39.398914] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:05.727 [2024-02-13 07:23:39.399030] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:07.104 ************************************ 00:23:07.104 END TEST raid5f_state_function_test_sb 00:23:07.104 ************************************ 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@289 -- # return 0 00:23:07.104 00:23:07.104 real 0m12.734s 00:23:07.104 user 0m22.630s 00:23:07.104 sys 0m1.420s 00:23:07.104 07:23:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:07.104 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:23:07.104 07:23:40 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:23:07.104 07:23:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:23:07.104 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:23:07.104 ************************************ 00:23:07.104 START TEST raid5f_superblock_test 00:23:07.104 ************************************ 00:23:07.104 07:23:40 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid5f 3 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@344 -- # local 
strip_size 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@357 -- # raid_pid=133014 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:23:07.104 07:23:40 -- bdev/bdev_raid.sh@358 -- # waitforlisten 133014 /var/tmp/spdk-raid.sock 00:23:07.104 07:23:40 -- common/autotest_common.sh@817 -- # '[' -z 133014 ']' 00:23:07.104 07:23:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:07.104 07:23:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:07.104 07:23:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:07.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:07.104 07:23:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:07.104 07:23:40 -- common/autotest_common.sh@10 -- # set +x 00:23:07.104 [2024-02-13 07:23:40.491326] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:23:07.104 [2024-02-13 07:23:40.491677] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133014 ] 00:23:07.104 [2024-02-13 07:23:40.652437] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.363 [2024-02-13 07:23:40.906723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.622 [2024-02-13 07:23:41.096139] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:07.881 07:23:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:07.881 07:23:41 -- common/autotest_common.sh@850 -- # return 0 00:23:07.881 07:23:41 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:23:07.881 07:23:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:07.881 07:23:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:23:07.881 07:23:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:23:07.881 07:23:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:23:07.881 07:23:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:07.881 07:23:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:07.881 07:23:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:07.881 07:23:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:23:08.140 malloc1 00:23:08.140 07:23:41 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:08.399 [2024-02-13 07:23:41.883924] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:08.399 [2024-02-13 07:23:41.884186] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:23:08.399 [2024-02-13 07:23:41.884258] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:08.399 [2024-02-13 07:23:41.884544] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.399 [2024-02-13 07:23:41.887122] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.399 [2024-02-13 07:23:41.887308] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:08.399 pt1 00:23:08.399 07:23:41 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:08.399 07:23:41 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:08.399 07:23:41 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:23:08.399 07:23:41 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:23:08.399 07:23:41 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:23:08.399 07:23:41 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:08.399 07:23:41 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:08.399 07:23:41 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:08.399 07:23:41 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:23:08.672 malloc2 00:23:08.672 07:23:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:08.672 [2024-02-13 07:23:42.331068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:08.672 [2024-02-13 07:23:42.331322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:08.672 [2024-02-13 07:23:42.331399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:08.672 [2024-02-13 07:23:42.331712] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:08.672 [2024-02-13 07:23:42.334117] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:08.672 [2024-02-13 07:23:42.334301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:08.672 pt2 00:23:08.672 07:23:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:08.672 07:23:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:08.672 07:23:42 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:23:08.672 07:23:42 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:23:08.672 07:23:42 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:23:08.672 07:23:42 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:23:08.672 07:23:42 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:23:08.672 07:23:42 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:23:08.673 07:23:42 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:23:08.932 malloc3 00:23:08.932 07:23:42 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:09.191 [2024-02-13 07:23:42.736986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:09.191 [2024-02-13 07:23:42.737243] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:23:09.191 [2024-02-13 07:23:42.737332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:09.191 [2024-02-13 07:23:42.737577] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:09.191 [2024-02-13 07:23:42.739839] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:09.191 [2024-02-13 07:23:42.740022] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:09.191 pt3 00:23:09.191 07:23:42 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:23:09.191 07:23:42 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:23:09.191 07:23:42 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:23:09.451 [2024-02-13 07:23:42.937115] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:09.451 [2024-02-13 07:23:42.938948] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:09.451 [2024-02-13 07:23:42.939164] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:09.451 [2024-02-13 07:23:42.939420] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:09.451 [2024-02-13 07:23:42.939527] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:09.451 [2024-02-13 07:23:42.939687] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:23:09.451 [2024-02-13 07:23:42.944026] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:09.451 [2024-02-13 07:23:42.944178] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:09.451 [2024-02-13 07:23:42.944497] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:09.451 07:23:42 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:09.451 07:23:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:09.451 07:23:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:09.451 07:23:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:09.451 07:23:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:09.451 07:23:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:09.451 07:23:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:09.451 07:23:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:09.452 07:23:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:09.452 07:23:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:09.452 07:23:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:09.452 07:23:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:09.711 07:23:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:09.711 "name": "raid_bdev1", 00:23:09.711 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:09.711 "strip_size_kb": 64, 00:23:09.711 "state": "online", 00:23:09.711 "raid_level": "raid5f", 00:23:09.711 "superblock": true, 00:23:09.711 "num_base_bdevs": 3, 00:23:09.711 "num_base_bdevs_discovered": 3, 00:23:09.711 "num_base_bdevs_operational": 3, 00:23:09.711 "base_bdevs_list": [ 00:23:09.711 { 00:23:09.711 "name": "pt1", 00:23:09.711 "uuid": 
"ef8f0c80-be36-50cc-aa6e-9ba843326b60", 00:23:09.711 "is_configured": true, 00:23:09.711 "data_offset": 2048, 00:23:09.711 "data_size": 63488 00:23:09.711 }, 00:23:09.711 { 00:23:09.711 "name": "pt2", 00:23:09.711 "uuid": "43084f6a-e558-5c24-b00a-f666c354f232", 00:23:09.711 "is_configured": true, 00:23:09.711 "data_offset": 2048, 00:23:09.711 "data_size": 63488 00:23:09.711 }, 00:23:09.711 { 00:23:09.711 "name": "pt3", 00:23:09.711 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:09.711 "is_configured": true, 00:23:09.711 "data_offset": 2048, 00:23:09.711 "data_size": 63488 00:23:09.711 } 00:23:09.711 ] 00:23:09.711 }' 00:23:09.711 07:23:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:09.711 07:23:43 -- common/autotest_common.sh@10 -- # set +x 00:23:10.278 07:23:43 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:10.278 07:23:43 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:23:10.536 [2024-02-13 07:23:44.010197] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:10.537 07:23:44 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=d3bd3be3-98ef-46bb-a78d-4224ec52b9a0 00:23:10.537 07:23:44 -- bdev/bdev_raid.sh@380 -- # '[' -z d3bd3be3-98ef-46bb-a78d-4224ec52b9a0 ']' 00:23:10.537 07:23:44 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:10.796 [2024-02-13 07:23:44.254059] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:10.796 [2024-02-13 07:23:44.254218] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:10.796 [2024-02-13 07:23:44.254408] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:10.796 [2024-02-13 07:23:44.254616] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:10.796 [2024-02-13 07:23:44.254717] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:10.796 07:23:44 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:10.796 07:23:44 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:23:10.796 07:23:44 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:23:10.796 07:23:44 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:23:10.796 07:23:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:10.796 07:23:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:11.055 07:23:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:11.055 07:23:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:11.314 07:23:44 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:23:11.314 07:23:44 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:11.573 07:23:45 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:23:11.573 07:23:45 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:23:11.832 07:23:45 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:23:11.832 07:23:45 -- bdev/bdev_raid.sh@401 -- # NOT 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:11.832 07:23:45 -- common/autotest_common.sh@638 -- # local es=0 00:23:11.832 07:23:45 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:11.832 07:23:45 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.832 07:23:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.832 07:23:45 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.832 07:23:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.832 07:23:45 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.832 07:23:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.832 07:23:45 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.832 07:23:45 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:11.832 07:23:45 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:23:11.832 [2024-02-13 07:23:45.518315] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:23:11.832 [2024-02-13 07:23:45.520076] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:23:11.832 [2024-02-13 07:23:45.520259] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:23:11.832 [2024-02-13 07:23:45.520348] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:23:11.832 [2024-02-13 07:23:45.520673] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:23:11.832 [2024-02-13 07:23:45.520844] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:23:11.832 [2024-02-13 07:23:45.521027] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:11.832 [2024-02-13 07:23:45.521084] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state configuring 00:23:11.832 request: 00:23:11.832 { 00:23:11.832 "name": "raid_bdev1", 00:23:11.832 "raid_level": "raid5f", 00:23:11.832 "base_bdevs": [ 00:23:11.832 "malloc1", 00:23:11.832 "malloc2", 00:23:11.832 "malloc3" 00:23:11.832 ], 00:23:11.832 "superblock": false, 00:23:11.832 "strip_size_kb": 64, 00:23:11.832 "method": "bdev_raid_create", 00:23:11.832 "req_id": 1 00:23:11.832 } 00:23:11.832 Got JSON-RPC error response 00:23:11.832 response: 00:23:11.832 { 00:23:11.832 "code": -17, 00:23:11.832 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:23:11.832 } 00:23:12.091 07:23:45 -- common/autotest_common.sh@641 -- # es=1 00:23:12.091 07:23:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:12.091 07:23:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:12.091 07:23:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:12.091 07:23:45 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.091 07:23:45 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:23:12.091 07:23:45 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:23:12.091 07:23:45 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:23:12.091 07:23:45 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:12.350 [2024-02-13 07:23:45.946319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:12.350 [2024-02-13 07:23:45.946560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:12.350 [2024-02-13 07:23:45.946629] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:12.350 [2024-02-13 07:23:45.946750] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:12.350 [2024-02-13 07:23:45.949169] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:12.350 [2024-02-13 07:23:45.949422] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:12.350 [2024-02-13 07:23:45.949651] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:12.350 [2024-02-13 07:23:45.949827] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:12.350 pt1 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:12.350 07:23:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:12.611 07:23:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:12.611 "name": "raid_bdev1", 00:23:12.611 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:12.611 "strip_size_kb": 64, 00:23:12.611 "state": "configuring", 00:23:12.611 "raid_level": "raid5f", 00:23:12.611 "superblock": true, 00:23:12.611 "num_base_bdevs": 3, 00:23:12.611 "num_base_bdevs_discovered": 1, 00:23:12.611 "num_base_bdevs_operational": 3, 00:23:12.611 "base_bdevs_list": [ 00:23:12.611 { 00:23:12.611 "name": "pt1", 00:23:12.611 "uuid": "ef8f0c80-be36-50cc-aa6e-9ba843326b60", 00:23:12.611 "is_configured": true, 00:23:12.611 "data_offset": 2048, 00:23:12.611 "data_size": 63488 00:23:12.611 }, 00:23:12.611 { 00:23:12.611 "name": null, 00:23:12.611 "uuid": "43084f6a-e558-5c24-b00a-f666c354f232", 00:23:12.611 "is_configured": false, 00:23:12.611 "data_offset": 2048, 00:23:12.611 "data_size": 63488 00:23:12.611 }, 00:23:12.611 { 00:23:12.611 "name": null, 00:23:12.611 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:12.611 "is_configured": false, 00:23:12.611 
"data_offset": 2048, 00:23:12.611 "data_size": 63488 00:23:12.611 } 00:23:12.611 ] 00:23:12.611 }' 00:23:12.611 07:23:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:12.611 07:23:46 -- common/autotest_common.sh@10 -- # set +x 00:23:13.178 07:23:46 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:23:13.178 07:23:46 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:13.437 [2024-02-13 07:23:46.946661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:13.437 [2024-02-13 07:23:46.946938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:13.437 [2024-02-13 07:23:46.947027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:23:13.437 [2024-02-13 07:23:46.947257] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:13.437 [2024-02-13 07:23:46.947795] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:13.437 [2024-02-13 07:23:46.947977] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:13.437 [2024-02-13 07:23:46.948216] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:13.437 [2024-02-13 07:23:46.948350] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:13.437 pt2 00:23:13.437 07:23:46 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:13.696 [2024-02-13 07:23:47.134727] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:13.696 07:23:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:13.955 07:23:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:13.955 "name": "raid_bdev1", 00:23:13.955 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:13.955 "strip_size_kb": 64, 00:23:13.955 "state": "configuring", 00:23:13.955 "raid_level": "raid5f", 00:23:13.955 "superblock": true, 00:23:13.955 "num_base_bdevs": 3, 00:23:13.955 "num_base_bdevs_discovered": 1, 00:23:13.955 "num_base_bdevs_operational": 3, 00:23:13.955 "base_bdevs_list": [ 00:23:13.955 { 00:23:13.955 "name": "pt1", 00:23:13.955 "uuid": "ef8f0c80-be36-50cc-aa6e-9ba843326b60", 00:23:13.955 "is_configured": true, 00:23:13.955 "data_offset": 2048, 00:23:13.955 "data_size": 63488 00:23:13.955 }, 00:23:13.955 { 00:23:13.955 "name": null, 00:23:13.955 "uuid": 
"43084f6a-e558-5c24-b00a-f666c354f232", 00:23:13.955 "is_configured": false, 00:23:13.955 "data_offset": 2048, 00:23:13.955 "data_size": 63488 00:23:13.955 }, 00:23:13.955 { 00:23:13.955 "name": null, 00:23:13.955 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:13.955 "is_configured": false, 00:23:13.955 "data_offset": 2048, 00:23:13.955 "data_size": 63488 00:23:13.955 } 00:23:13.955 ] 00:23:13.955 }' 00:23:13.955 07:23:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:13.955 07:23:47 -- common/autotest_common.sh@10 -- # set +x 00:23:14.523 07:23:48 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:23:14.523 07:23:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:14.523 07:23:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:14.782 [2024-02-13 07:23:48.242915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:14.782 [2024-02-13 07:23:48.243139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:14.782 [2024-02-13 07:23:48.243208] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:23:14.782 [2024-02-13 07:23:48.243477] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:14.782 [2024-02-13 07:23:48.244013] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:14.782 [2024-02-13 07:23:48.244208] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:14.782 [2024-02-13 07:23:48.244426] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:14.782 [2024-02-13 07:23:48.244555] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:14.782 pt2 00:23:14.782 07:23:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:14.782 07:23:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:14.782 07:23:48 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:15.041 [2024-02-13 07:23:48.499014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:15.041 [2024-02-13 07:23:48.499317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:15.041 [2024-02-13 07:23:48.499409] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:23:15.041 [2024-02-13 07:23:48.499568] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:15.041 [2024-02-13 07:23:48.500113] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:15.041 [2024-02-13 07:23:48.500284] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:15.041 [2024-02-13 07:23:48.500511] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:15.041 [2024-02-13 07:23:48.500666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:15.041 [2024-02-13 07:23:48.500848] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:23:15.041 [2024-02-13 07:23:48.501010] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:15.041 [2024-02-13 07:23:48.501183] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005c70 00:23:15.041 [2024-02-13 07:23:48.505926] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:15.041 [2024-02-13 07:23:48.506086] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:15.041 [2024-02-13 07:23:48.506387] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:15.041 pt3 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:15.041 07:23:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:15.300 07:23:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:15.300 "name": "raid_bdev1", 00:23:15.300 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:15.300 "strip_size_kb": 64, 00:23:15.300 "state": "online", 00:23:15.300 "raid_level": "raid5f", 00:23:15.300 "superblock": true, 00:23:15.300 "num_base_bdevs": 3, 00:23:15.300 "num_base_bdevs_discovered": 3, 00:23:15.300 "num_base_bdevs_operational": 3, 00:23:15.300 "base_bdevs_list": [ 00:23:15.300 { 00:23:15.300 "name": "pt1", 00:23:15.300 "uuid": "ef8f0c80-be36-50cc-aa6e-9ba843326b60", 00:23:15.300 "is_configured": true, 00:23:15.300 "data_offset": 2048, 00:23:15.300 "data_size": 63488 00:23:15.300 }, 00:23:15.300 { 00:23:15.300 "name": "pt2", 00:23:15.300 "uuid": "43084f6a-e558-5c24-b00a-f666c354f232", 00:23:15.300 "is_configured": true, 00:23:15.300 "data_offset": 2048, 00:23:15.300 "data_size": 63488 00:23:15.300 }, 00:23:15.300 { 00:23:15.300 "name": "pt3", 00:23:15.300 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:15.300 "is_configured": true, 00:23:15.300 "data_offset": 2048, 00:23:15.300 "data_size": 63488 00:23:15.300 } 00:23:15.300 ] 00:23:15.300 }' 00:23:15.300 07:23:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:15.300 07:23:48 -- common/autotest_common.sh@10 -- # set +x 00:23:15.867 07:23:49 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:15.867 07:23:49 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:23:16.126 [2024-02-13 07:23:49.704469] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:16.126 07:23:49 -- bdev/bdev_raid.sh@430 -- # '[' d3bd3be3-98ef-46bb-a78d-4224ec52b9a0 '!=' d3bd3be3-98ef-46bb-a78d-4224ec52b9a0 ']' 00:23:16.126 07:23:49 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:23:16.126 07:23:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:23:16.126 
07:23:49 -- bdev/bdev_raid.sh@196 -- # return 0 00:23:16.126 07:23:49 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:23:16.384 [2024-02-13 07:23:49.940368] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:16.384 07:23:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:16.642 07:23:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:16.642 "name": "raid_bdev1", 00:23:16.642 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:16.642 "strip_size_kb": 64, 00:23:16.642 "state": "online", 00:23:16.642 "raid_level": "raid5f", 00:23:16.642 "superblock": true, 00:23:16.642 "num_base_bdevs": 3, 00:23:16.642 "num_base_bdevs_discovered": 2, 00:23:16.642 "num_base_bdevs_operational": 2, 00:23:16.642 "base_bdevs_list": [ 00:23:16.642 { 00:23:16.642 "name": null, 00:23:16.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:16.642 "is_configured": false, 00:23:16.642 "data_offset": 2048, 00:23:16.642 "data_size": 63488 00:23:16.642 }, 00:23:16.642 { 00:23:16.642 "name": "pt2", 00:23:16.642 "uuid": "43084f6a-e558-5c24-b00a-f666c354f232", 00:23:16.642 "is_configured": true, 00:23:16.642 "data_offset": 2048, 00:23:16.642 "data_size": 63488 00:23:16.642 }, 00:23:16.642 { 00:23:16.642 "name": "pt3", 00:23:16.642 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:16.642 "is_configured": true, 00:23:16.642 "data_offset": 2048, 00:23:16.642 "data_size": 63488 00:23:16.642 } 00:23:16.642 ] 00:23:16.642 }' 00:23:16.642 07:23:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:16.642 07:23:50 -- common/autotest_common.sh@10 -- # set +x 00:23:17.209 07:23:50 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:17.467 [2024-02-13 07:23:51.028610] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:17.467 [2024-02-13 07:23:51.028802] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:17.467 [2024-02-13 07:23:51.028998] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:17.467 [2024-02-13 07:23:51.029198] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:17.467 [2024-02-13 07:23:51.029313] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:23:17.467 07:23:51 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:17.467 07:23:51 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:23:17.726 07:23:51 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:23:17.726 07:23:51 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:23:17.726 07:23:51 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:23:17.726 07:23:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:17.726 07:23:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:23:17.985 07:23:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:17.985 07:23:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:17.985 07:23:51 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:17.985 07:23:51 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:23:17.985 07:23:51 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:23:17.985 07:23:51 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:23:17.985 07:23:51 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:17.985 07:23:51 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:18.244 [2024-02-13 07:23:51.848761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:18.244 [2024-02-13 07:23:51.849047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:18.244 [2024-02-13 07:23:51.849244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:23:18.244 [2024-02-13 07:23:51.849393] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:18.244 [2024-02-13 07:23:51.851645] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:18.244 [2024-02-13 07:23:51.851811] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:18.244 [2024-02-13 07:23:51.852059] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:18.244 [2024-02-13 07:23:51.852227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:18.244 pt2 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:18.244 07:23:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:18.503 07:23:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:18.503 "name": "raid_bdev1", 00:23:18.503 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:18.503 "strip_size_kb": 64, 
00:23:18.503 "state": "configuring", 00:23:18.503 "raid_level": "raid5f", 00:23:18.503 "superblock": true, 00:23:18.503 "num_base_bdevs": 3, 00:23:18.503 "num_base_bdevs_discovered": 1, 00:23:18.503 "num_base_bdevs_operational": 2, 00:23:18.503 "base_bdevs_list": [ 00:23:18.503 { 00:23:18.503 "name": null, 00:23:18.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:18.503 "is_configured": false, 00:23:18.503 "data_offset": 2048, 00:23:18.503 "data_size": 63488 00:23:18.503 }, 00:23:18.503 { 00:23:18.503 "name": "pt2", 00:23:18.503 "uuid": "43084f6a-e558-5c24-b00a-f666c354f232", 00:23:18.503 "is_configured": true, 00:23:18.503 "data_offset": 2048, 00:23:18.503 "data_size": 63488 00:23:18.503 }, 00:23:18.503 { 00:23:18.503 "name": null, 00:23:18.503 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:18.503 "is_configured": false, 00:23:18.503 "data_offset": 2048, 00:23:18.503 "data_size": 63488 00:23:18.503 } 00:23:18.503 ] 00:23:18.503 }' 00:23:18.503 07:23:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:18.503 07:23:52 -- common/autotest_common.sh@10 -- # set +x 00:23:19.067 07:23:52 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:23:19.067 07:23:52 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:23:19.067 07:23:52 -- bdev/bdev_raid.sh@462 -- # i=2 00:23:19.067 07:23:52 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:19.324 [2024-02-13 07:23:52.812997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:19.324 [2024-02-13 07:23:52.813259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:19.324 [2024-02-13 07:23:52.813339] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:23:19.324 [2024-02-13 07:23:52.813582] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:19.324 [2024-02-13 07:23:52.814110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:19.324 [2024-02-13 07:23:52.814302] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:19.324 [2024-02-13 07:23:52.814536] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:19.324 [2024-02-13 07:23:52.814656] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:19.324 [2024-02-13 07:23:52.814871] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000ae80 00:23:19.324 [2024-02-13 07:23:52.814982] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:19.324 [2024-02-13 07:23:52.815119] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:23:19.324 [2024-02-13 07:23:52.819404] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000ae80 00:23:19.324 [2024-02-13 07:23:52.819531] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000ae80 00:23:19.324 [2024-02-13 07:23:52.819871] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:19.324 pt3 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:19.324 
07:23:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:19.324 07:23:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:19.583 07:23:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:19.583 "name": "raid_bdev1", 00:23:19.583 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:19.583 "strip_size_kb": 64, 00:23:19.583 "state": "online", 00:23:19.583 "raid_level": "raid5f", 00:23:19.583 "superblock": true, 00:23:19.583 "num_base_bdevs": 3, 00:23:19.583 "num_base_bdevs_discovered": 2, 00:23:19.583 "num_base_bdevs_operational": 2, 00:23:19.583 "base_bdevs_list": [ 00:23:19.583 { 00:23:19.583 "name": null, 00:23:19.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:19.583 "is_configured": false, 00:23:19.583 "data_offset": 2048, 00:23:19.583 "data_size": 63488 00:23:19.583 }, 00:23:19.583 { 00:23:19.583 "name": "pt2", 00:23:19.583 "uuid": "43084f6a-e558-5c24-b00a-f666c354f232", 00:23:19.583 "is_configured": true, 00:23:19.583 "data_offset": 2048, 00:23:19.583 "data_size": 63488 00:23:19.583 }, 00:23:19.583 { 00:23:19.583 "name": "pt3", 00:23:19.583 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:19.583 "is_configured": true, 00:23:19.583 "data_offset": 2048, 00:23:19.583 "data_size": 63488 00:23:19.583 } 00:23:19.583 ] 00:23:19.583 }' 00:23:19.583 07:23:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:19.583 07:23:53 -- common/autotest_common.sh@10 -- # set +x 00:23:20.149 07:23:53 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:23:20.150 07:23:53 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:20.407 [2024-02-13 07:23:53.937515] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:20.407 [2024-02-13 07:23:53.937686] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:20.407 [2024-02-13 07:23:53.937899] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:20.407 [2024-02-13 07:23:53.938125] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:20.407 [2024-02-13 07:23:53.938264] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ae80 name raid_bdev1, state offline 00:23:20.407 07:23:53 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.407 07:23:53 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:23:20.666 07:23:54 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:23:20.666 07:23:54 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:23:20.666 07:23:54 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:23:20.924 [2024-02-13 07:23:54.373615] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:23:20.924 [2024-02-13 07:23:54.373823] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:20.924 [2024-02-13 07:23:54.373895] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:23:20.924 [2024-02-13 07:23:54.374014] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:20.924 [2024-02-13 07:23:54.376340] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:20.924 [2024-02-13 07:23:54.376517] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:23:20.924 [2024-02-13 07:23:54.376741] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:23:20.924 [2024-02-13 07:23:54.376893] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:23:20.924 pt1 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:20.924 07:23:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:21.181 07:23:54 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:21.181 "name": "raid_bdev1", 00:23:21.181 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:21.181 "strip_size_kb": 64, 00:23:21.181 "state": "configuring", 00:23:21.181 "raid_level": "raid5f", 00:23:21.181 "superblock": true, 00:23:21.181 "num_base_bdevs": 3, 00:23:21.181 "num_base_bdevs_discovered": 1, 00:23:21.181 "num_base_bdevs_operational": 3, 00:23:21.181 "base_bdevs_list": [ 00:23:21.181 { 00:23:21.181 "name": "pt1", 00:23:21.181 "uuid": "ef8f0c80-be36-50cc-aa6e-9ba843326b60", 00:23:21.181 "is_configured": true, 00:23:21.181 "data_offset": 2048, 00:23:21.181 "data_size": 63488 00:23:21.181 }, 00:23:21.181 { 00:23:21.181 "name": null, 00:23:21.181 "uuid": "43084f6a-e558-5c24-b00a-f666c354f232", 00:23:21.181 "is_configured": false, 00:23:21.182 "data_offset": 2048, 00:23:21.182 "data_size": 63488 00:23:21.182 }, 00:23:21.182 { 00:23:21.182 "name": null, 00:23:21.182 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:21.182 "is_configured": false, 00:23:21.182 "data_offset": 2048, 00:23:21.182 "data_size": 63488 00:23:21.182 } 00:23:21.182 ] 00:23:21.182 }' 00:23:21.182 07:23:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:21.182 07:23:54 -- common/autotest_common.sh@10 -- # set +x 00:23:21.748 07:23:55 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:23:21.748 07:23:55 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:21.748 07:23:55 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:23:22.007 07:23:55 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:22.007 07:23:55 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:22.007 07:23:55 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:23:22.265 07:23:55 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:23:22.265 07:23:55 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:23:22.265 07:23:55 -- bdev/bdev_raid.sh@489 -- # i=2 00:23:22.265 07:23:55 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:23:22.265 [2024-02-13 07:23:55.909961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:23:22.265 [2024-02-13 07:23:55.910273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:22.265 [2024-02-13 07:23:55.910347] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:23:22.265 [2024-02-13 07:23:55.910550] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:22.265 [2024-02-13 07:23:55.911195] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:22.266 [2024-02-13 07:23:55.911361] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:23:22.266 [2024-02-13 07:23:55.911568] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:23:22.266 [2024-02-13 07:23:55.911696] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:23:22.266 [2024-02-13 07:23:55.911835] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:22.266 [2024-02-13 07:23:55.911895] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000ba80 name raid_bdev1, state configuring 00:23:22.266 [2024-02-13 07:23:55.912002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:23:22.266 pt3 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:22.266 07:23:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:22.524 07:23:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:22.524 "name": "raid_bdev1", 00:23:22.524 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:22.524 "strip_size_kb": 64, 00:23:22.524 "state": "configuring", 00:23:22.524 "raid_level": "raid5f", 00:23:22.524 "superblock": true, 00:23:22.524 "num_base_bdevs": 3, 00:23:22.524 
"num_base_bdevs_discovered": 1, 00:23:22.524 "num_base_bdevs_operational": 2, 00:23:22.524 "base_bdevs_list": [ 00:23:22.524 { 00:23:22.524 "name": null, 00:23:22.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:22.524 "is_configured": false, 00:23:22.524 "data_offset": 2048, 00:23:22.524 "data_size": 63488 00:23:22.524 }, 00:23:22.524 { 00:23:22.524 "name": null, 00:23:22.524 "uuid": "43084f6a-e558-5c24-b00a-f666c354f232", 00:23:22.524 "is_configured": false, 00:23:22.524 "data_offset": 2048, 00:23:22.524 "data_size": 63488 00:23:22.524 }, 00:23:22.524 { 00:23:22.524 "name": "pt3", 00:23:22.524 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:22.524 "is_configured": true, 00:23:22.524 "data_offset": 2048, 00:23:22.524 "data_size": 63488 00:23:22.524 } 00:23:22.524 ] 00:23:22.524 }' 00:23:22.524 07:23:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:22.524 07:23:56 -- common/autotest_common.sh@10 -- # set +x 00:23:23.459 07:23:56 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:23:23.459 07:23:56 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:23.459 07:23:56 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:23:23.459 [2024-02-13 07:23:56.958197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:23:23.459 [2024-02-13 07:23:56.958470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:23.459 [2024-02-13 07:23:56.958537] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:23:23.459 [2024-02-13 07:23:56.958751] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:23.459 [2024-02-13 07:23:56.959311] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:23.459 [2024-02-13 07:23:56.959470] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:23:23.459 [2024-02-13 07:23:56.959672] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:23:23.459 [2024-02-13 07:23:56.959819] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:23:23.460 [2024-02-13 07:23:56.959981] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c080 00:23:23.460 [2024-02-13 07:23:56.960078] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:23.460 [2024-02-13 07:23:56.960207] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:23:23.460 [2024-02-13 07:23:56.964526] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c080 00:23:23.460 [2024-02-13 07:23:56.964670] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c080 00:23:23.460 [2024-02-13 07:23:56.964976] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:23.460 pt2 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 
00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:23.460 07:23:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:23.719 07:23:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:23.719 "name": "raid_bdev1", 00:23:23.719 "uuid": "d3bd3be3-98ef-46bb-a78d-4224ec52b9a0", 00:23:23.719 "strip_size_kb": 64, 00:23:23.719 "state": "online", 00:23:23.719 "raid_level": "raid5f", 00:23:23.719 "superblock": true, 00:23:23.719 "num_base_bdevs": 3, 00:23:23.719 "num_base_bdevs_discovered": 2, 00:23:23.719 "num_base_bdevs_operational": 2, 00:23:23.719 "base_bdevs_list": [ 00:23:23.719 { 00:23:23.719 "name": null, 00:23:23.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:23.719 "is_configured": false, 00:23:23.719 "data_offset": 2048, 00:23:23.719 "data_size": 63488 00:23:23.719 }, 00:23:23.719 { 00:23:23.719 "name": "pt2", 00:23:23.719 "uuid": "43084f6a-e558-5c24-b00a-f666c354f232", 00:23:23.719 "is_configured": true, 00:23:23.719 "data_offset": 2048, 00:23:23.719 "data_size": 63488 00:23:23.719 }, 00:23:23.719 { 00:23:23.719 "name": "pt3", 00:23:23.719 "uuid": "969c6b41-b7b2-5712-b86a-057bd77caba5", 00:23:23.719 "is_configured": true, 00:23:23.719 "data_offset": 2048, 00:23:23.719 "data_size": 63488 00:23:23.719 } 00:23:23.719 ] 00:23:23.719 }' 00:23:23.719 07:23:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:23.719 07:23:57 -- common/autotest_common.sh@10 -- # set +x 00:23:24.295 07:23:57 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:24.296 07:23:57 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:23:24.577 [2024-02-13 07:23:58.026864] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:24.577 07:23:58 -- bdev/bdev_raid.sh@506 -- # '[' d3bd3be3-98ef-46bb-a78d-4224ec52b9a0 '!=' d3bd3be3-98ef-46bb-a78d-4224ec52b9a0 ']' 00:23:24.577 07:23:58 -- bdev/bdev_raid.sh@511 -- # killprocess 133014 00:23:24.577 07:23:58 -- common/autotest_common.sh@924 -- # '[' -z 133014 ']' 00:23:24.577 07:23:58 -- common/autotest_common.sh@928 -- # kill -0 133014 00:23:24.577 07:23:58 -- common/autotest_common.sh@929 -- # uname 00:23:24.577 07:23:58 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:24.577 07:23:58 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 133014 00:23:24.577 killing process with pid 133014 00:23:24.577 07:23:58 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:24.577 07:23:58 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:24.577 07:23:58 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 133014' 00:23:24.577 07:23:58 -- common/autotest_common.sh@943 -- # kill 133014 00:23:24.577 07:23:58 -- common/autotest_common.sh@948 -- # wait 133014 00:23:24.577 [2024-02-13 07:23:58.060414] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:24.577 [2024-02-13 07:23:58.060484] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:24.577 [2024-02-13 07:23:58.060593] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:24.577 [2024-02-13 07:23:58.060637] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c080 name raid_bdev1, state offline 00:23:24.836 [2024-02-13 07:23:58.283814] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:25.773 ************************************ 00:23:25.773 END TEST raid5f_superblock_test 00:23:25.773 ************************************ 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:23:25.773 00:23:25.773 real 0m18.866s 00:23:25.773 user 0m34.638s 00:23:25.773 sys 0m2.233s 00:23:25.773 07:23:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:25.773 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false 00:23:25.773 07:23:59 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:23:25.773 07:23:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:23:25.773 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:25.773 ************************************ 00:23:25.773 START TEST raid5f_rebuild_test 00:23:25.773 ************************************ 00:23:25.773 07:23:59 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid5f 3 false false 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:25.773 07:23:59 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:23:25.774 07:23:59 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:25.774 07:23:59 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:25.774 07:23:59 -- bdev/bdev_raid.sh@539 -- # '[' false = 
true ']' 00:23:25.774 07:23:59 -- bdev/bdev_raid.sh@544 -- # raid_pid=133653 00:23:25.774 07:23:59 -- bdev/bdev_raid.sh@545 -- # waitforlisten 133653 /var/tmp/spdk-raid.sock 00:23:25.774 07:23:59 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:25.774 07:23:59 -- common/autotest_common.sh@817 -- # '[' -z 133653 ']' 00:23:25.774 07:23:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:25.774 07:23:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:25.774 07:23:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:25.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:25.774 07:23:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:25.774 07:23:59 -- common/autotest_common.sh@10 -- # set +x 00:23:25.774 [2024-02-13 07:23:59.429637] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:23:25.774 [2024-02-13 07:23:59.430015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133653 ] 00:23:25.774 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:25.774 Zero copy mechanism will not be used. 00:23:26.032 [2024-02-13 07:23:59.589721] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.291 [2024-02-13 07:23:59.769471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.291 [2024-02-13 07:23:59.945202] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:26.858 07:24:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:26.858 07:24:00 -- common/autotest_common.sh@850 -- # return 0 00:23:26.858 07:24:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:26.858 07:24:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:26.858 07:24:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:23:27.117 BaseBdev1 00:23:27.117 07:24:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:27.117 07:24:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:27.117 07:24:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:23:27.376 BaseBdev2 00:23:27.376 07:24:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:27.376 07:24:00 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:23:27.376 07:24:00 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:23:27.634 BaseBdev3 00:23:27.634 07:24:01 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:27.893 spare_malloc 00:23:27.893 07:24:01 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:28.152 spare_delay 00:23:28.152 07:24:01 -- bdev/bdev_raid.sh@560 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:28.410 [2024-02-13 07:24:01.872122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:28.410 [2024-02-13 07:24:01.872389] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:28.410 [2024-02-13 07:24:01.872457] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:28.410 [2024-02-13 07:24:01.872597] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:28.410 [2024-02-13 07:24:01.875091] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:28.410 [2024-02-13 07:24:01.875264] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:28.410 spare 00:23:28.410 07:24:01 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:28.410 [2024-02-13 07:24:02.100277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:28.410 [2024-02-13 07:24:02.102442] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:28.410 [2024-02-13 07:24:02.102666] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:28.410 [2024-02-13 07:24:02.102794] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000008a80 00:23:28.410 [2024-02-13 07:24:02.103004] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:23:28.410 [2024-02-13 07:24:02.103287] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:23:28.670 [2024-02-13 07:24:02.108214] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000008a80 00:23:28.670 [2024-02-13 07:24:02.108344] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000008a80 00:23:28.670 [2024-02-13 07:24:02.108665] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:28.670 07:24:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:28.931 07:24:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:28.931 "name": "raid_bdev1", 00:23:28.931 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:28.931 "strip_size_kb": 64, 00:23:28.931 "state": "online", 00:23:28.931 "raid_level": "raid5f", 00:23:28.931 
"superblock": false, 00:23:28.931 "num_base_bdevs": 3, 00:23:28.931 "num_base_bdevs_discovered": 3, 00:23:28.931 "num_base_bdevs_operational": 3, 00:23:28.931 "base_bdevs_list": [ 00:23:28.931 { 00:23:28.931 "name": "BaseBdev1", 00:23:28.931 "uuid": "6c706c7e-9492-4d61-b9a3-a3215fa85799", 00:23:28.931 "is_configured": true, 00:23:28.931 "data_offset": 0, 00:23:28.931 "data_size": 65536 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "name": "BaseBdev2", 00:23:28.931 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:28.931 "is_configured": true, 00:23:28.931 "data_offset": 0, 00:23:28.931 "data_size": 65536 00:23:28.931 }, 00:23:28.931 { 00:23:28.931 "name": "BaseBdev3", 00:23:28.931 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:28.931 "is_configured": true, 00:23:28.931 "data_offset": 0, 00:23:28.931 "data_size": 65536 00:23:28.931 } 00:23:28.931 ] 00:23:28.931 }' 00:23:28.931 07:24:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:28.931 07:24:02 -- common/autotest_common.sh@10 -- # set +x 00:23:29.498 07:24:02 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:29.498 07:24:02 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:29.756 [2024-02-13 07:24:03.246516] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:29.756 07:24:03 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=131072 00:23:29.757 07:24:03 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:29.757 07:24:03 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:30.015 07:24:03 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:23:30.015 07:24:03 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:30.015 07:24:03 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:30.015 07:24:03 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:30.015 07:24:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:30.015 07:24:03 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:30.015 07:24:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:30.015 07:24:03 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:30.015 07:24:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:30.015 07:24:03 -- bdev/nbd_common.sh@12 -- # local i 00:23:30.015 07:24:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:30.015 07:24:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:30.015 07:24:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:30.274 [2024-02-13 07:24:03.742642] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005930 00:23:30.274 /dev/nbd0 00:23:30.274 07:24:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:30.274 07:24:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:30.274 07:24:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:30.274 07:24:03 -- common/autotest_common.sh@855 -- # local i 00:23:30.274 07:24:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:30.274 07:24:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:30.274 07:24:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:30.274 07:24:03 -- common/autotest_common.sh@859 -- # break 00:23:30.274 07:24:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:30.274 07:24:03 -- 
common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:30.274 07:24:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:30.274 1+0 records in 00:23:30.274 1+0 records out 00:23:30.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353143 s, 11.6 MB/s 00:23:30.274 07:24:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.274 07:24:03 -- common/autotest_common.sh@872 -- # size=4096 00:23:30.274 07:24:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.274 07:24:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:30.274 07:24:03 -- common/autotest_common.sh@875 -- # return 0 00:23:30.274 07:24:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:30.274 07:24:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:30.274 07:24:03 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:30.274 07:24:03 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:30.274 07:24:03 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:30.274 07:24:03 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:23:30.533 512+0 records in 00:23:30.533 512+0 records out 00:23:30.533 67108864 bytes (67 MB, 64 MiB) copied, 0.335196 s, 200 MB/s 00:23:30.533 07:24:04 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:30.533 07:24:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:30.533 07:24:04 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:30.533 07:24:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:30.533 07:24:04 -- bdev/nbd_common.sh@51 -- # local i 00:23:30.533 07:24:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:30.533 07:24:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:30.791 07:24:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:30.791 07:24:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:30.791 07:24:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:30.791 07:24:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:30.791 07:24:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:30.791 07:24:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:30.791 [2024-02-13 07:24:04.399873] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:30.791 07:24:04 -- bdev/nbd_common.sh@41 -- # break 00:23:30.791 07:24:04 -- bdev/nbd_common.sh@45 -- # return 0 00:23:30.791 07:24:04 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:31.050 [2024-02-13 07:24:04.625878] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:31.050 07:24:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:31.309 07:24:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:31.309 "name": "raid_bdev1", 00:23:31.309 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:31.309 "strip_size_kb": 64, 00:23:31.309 "state": "online", 00:23:31.309 "raid_level": "raid5f", 00:23:31.309 "superblock": false, 00:23:31.309 "num_base_bdevs": 3, 00:23:31.309 "num_base_bdevs_discovered": 2, 00:23:31.309 "num_base_bdevs_operational": 2, 00:23:31.309 "base_bdevs_list": [ 00:23:31.309 { 00:23:31.309 "name": null, 00:23:31.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:31.309 "is_configured": false, 00:23:31.309 "data_offset": 0, 00:23:31.309 "data_size": 65536 00:23:31.309 }, 00:23:31.309 { 00:23:31.309 "name": "BaseBdev2", 00:23:31.309 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:31.309 "is_configured": true, 00:23:31.309 "data_offset": 0, 00:23:31.309 "data_size": 65536 00:23:31.309 }, 00:23:31.309 { 00:23:31.309 "name": "BaseBdev3", 00:23:31.309 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:31.309 "is_configured": true, 00:23:31.309 "data_offset": 0, 00:23:31.309 "data_size": 65536 00:23:31.309 } 00:23:31.309 ] 00:23:31.309 }' 00:23:31.309 07:24:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:31.309 07:24:04 -- common/autotest_common.sh@10 -- # set +x 00:23:31.876 07:24:05 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:32.134 [2024-02-13 07:24:05.582028] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:32.134 [2024-02-13 07:24:05.582216] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:32.134 [2024-02-13 07:24:05.593826] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002cfb0 00:23:32.134 [2024-02-13 07:24:05.599727] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:32.134 07:24:05 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:33.070 07:24:06 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:33.070 07:24:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:33.070 07:24:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:33.070 07:24:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:33.070 07:24:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:33.070 07:24:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.070 07:24:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.329 07:24:06 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:33.329 "name": "raid_bdev1", 00:23:33.329 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:33.329 "strip_size_kb": 64, 00:23:33.329 "state": "online", 00:23:33.329 "raid_level": "raid5f", 00:23:33.329 "superblock": false, 00:23:33.329 "num_base_bdevs": 3, 00:23:33.329 "num_base_bdevs_discovered": 3, 00:23:33.329 "num_base_bdevs_operational": 3, 00:23:33.329 "process": { 00:23:33.329 
"type": "rebuild", 00:23:33.329 "target": "spare", 00:23:33.329 "progress": { 00:23:33.329 "blocks": 24576, 00:23:33.329 "percent": 18 00:23:33.329 } 00:23:33.329 }, 00:23:33.329 "base_bdevs_list": [ 00:23:33.329 { 00:23:33.329 "name": "spare", 00:23:33.329 "uuid": "14a23442-e53f-5573-a05b-481df067d10b", 00:23:33.329 "is_configured": true, 00:23:33.329 "data_offset": 0, 00:23:33.329 "data_size": 65536 00:23:33.329 }, 00:23:33.329 { 00:23:33.329 "name": "BaseBdev2", 00:23:33.329 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:33.329 "is_configured": true, 00:23:33.329 "data_offset": 0, 00:23:33.329 "data_size": 65536 00:23:33.329 }, 00:23:33.329 { 00:23:33.329 "name": "BaseBdev3", 00:23:33.329 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:33.329 "is_configured": true, 00:23:33.329 "data_offset": 0, 00:23:33.329 "data_size": 65536 00:23:33.329 } 00:23:33.329 ] 00:23:33.329 }' 00:23:33.329 07:24:06 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:33.329 07:24:06 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:33.329 07:24:06 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:33.329 07:24:06 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:33.329 07:24:06 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:33.588 [2024-02-13 07:24:07.173596] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:33.588 [2024-02-13 07:24:07.213596] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:33.588 [2024-02-13 07:24:07.213801] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:33.588 07:24:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:33.847 07:24:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:33.847 "name": "raid_bdev1", 00:23:33.847 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:33.847 "strip_size_kb": 64, 00:23:33.847 "state": "online", 00:23:33.847 "raid_level": "raid5f", 00:23:33.847 "superblock": false, 00:23:33.847 "num_base_bdevs": 3, 00:23:33.847 "num_base_bdevs_discovered": 2, 00:23:33.847 "num_base_bdevs_operational": 2, 00:23:33.847 "base_bdevs_list": [ 00:23:33.847 { 00:23:33.847 "name": null, 00:23:33.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:33.847 "is_configured": false, 00:23:33.847 "data_offset": 0, 00:23:33.847 "data_size": 65536 00:23:33.847 }, 00:23:33.847 { 00:23:33.847 "name": "BaseBdev2", 00:23:33.847 "uuid": 
"7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:33.847 "is_configured": true, 00:23:33.847 "data_offset": 0, 00:23:33.847 "data_size": 65536 00:23:33.847 }, 00:23:33.847 { 00:23:33.847 "name": "BaseBdev3", 00:23:33.847 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:33.847 "is_configured": true, 00:23:33.847 "data_offset": 0, 00:23:33.847 "data_size": 65536 00:23:33.847 } 00:23:33.847 ] 00:23:33.847 }' 00:23:33.847 07:24:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:33.847 07:24:07 -- common/autotest_common.sh@10 -- # set +x 00:23:34.783 07:24:08 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:34.783 07:24:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:34.783 07:24:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:34.783 07:24:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:34.783 07:24:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:34.783 07:24:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:34.783 07:24:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:34.784 07:24:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:34.784 "name": "raid_bdev1", 00:23:34.784 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:34.784 "strip_size_kb": 64, 00:23:34.784 "state": "online", 00:23:34.784 "raid_level": "raid5f", 00:23:34.784 "superblock": false, 00:23:34.784 "num_base_bdevs": 3, 00:23:34.784 "num_base_bdevs_discovered": 2, 00:23:34.784 "num_base_bdevs_operational": 2, 00:23:34.784 "base_bdevs_list": [ 00:23:34.784 { 00:23:34.784 "name": null, 00:23:34.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:34.784 "is_configured": false, 00:23:34.784 "data_offset": 0, 00:23:34.784 "data_size": 65536 00:23:34.784 }, 00:23:34.784 { 00:23:34.784 "name": "BaseBdev2", 00:23:34.784 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:34.784 "is_configured": true, 00:23:34.784 "data_offset": 0, 00:23:34.784 "data_size": 65536 00:23:34.784 }, 00:23:34.784 { 00:23:34.784 "name": "BaseBdev3", 00:23:34.784 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:34.784 "is_configured": true, 00:23:34.784 "data_offset": 0, 00:23:34.784 "data_size": 65536 00:23:34.784 } 00:23:34.784 ] 00:23:34.784 }' 00:23:34.784 07:24:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:35.042 07:24:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:35.042 07:24:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:35.042 07:24:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:35.042 07:24:08 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:35.301 [2024-02-13 07:24:08.783317] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:35.301 [2024-02-13 07:24:08.783477] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:35.301 [2024-02-13 07:24:08.794725] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d150 00:23:35.301 [2024-02-13 07:24:08.800570] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:35.301 07:24:08 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:36.239 07:24:09 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.239 07:24:09 -- bdev/bdev_raid.sh@183 -- # 
local raid_bdev_name=raid_bdev1 00:23:36.239 07:24:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:36.239 07:24:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:36.239 07:24:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:36.239 07:24:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.239 07:24:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.501 07:24:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:36.501 "name": "raid_bdev1", 00:23:36.501 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:36.501 "strip_size_kb": 64, 00:23:36.501 "state": "online", 00:23:36.501 "raid_level": "raid5f", 00:23:36.501 "superblock": false, 00:23:36.501 "num_base_bdevs": 3, 00:23:36.501 "num_base_bdevs_discovered": 3, 00:23:36.501 "num_base_bdevs_operational": 3, 00:23:36.501 "process": { 00:23:36.501 "type": "rebuild", 00:23:36.501 "target": "spare", 00:23:36.501 "progress": { 00:23:36.501 "blocks": 24576, 00:23:36.501 "percent": 18 00:23:36.501 } 00:23:36.501 }, 00:23:36.501 "base_bdevs_list": [ 00:23:36.501 { 00:23:36.501 "name": "spare", 00:23:36.501 "uuid": "14a23442-e53f-5573-a05b-481df067d10b", 00:23:36.501 "is_configured": true, 00:23:36.501 "data_offset": 0, 00:23:36.501 "data_size": 65536 00:23:36.501 }, 00:23:36.501 { 00:23:36.501 "name": "BaseBdev2", 00:23:36.501 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:36.501 "is_configured": true, 00:23:36.501 "data_offset": 0, 00:23:36.501 "data_size": 65536 00:23:36.501 }, 00:23:36.501 { 00:23:36.501 "name": "BaseBdev3", 00:23:36.501 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:36.501 "is_configured": true, 00:23:36.501 "data_offset": 0, 00:23:36.501 "data_size": 65536 00:23:36.501 } 00:23:36.501 ] 00:23:36.501 }' 00:23:36.501 07:24:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:36.501 07:24:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:36.501 07:24:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@657 -- # local timeout=631 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:36.760 07:24:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:36.760 "name": "raid_bdev1", 00:23:36.760 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:36.760 "strip_size_kb": 64, 00:23:36.760 "state": "online", 00:23:36.760 "raid_level": "raid5f", 00:23:36.760 "superblock": false, 00:23:36.760 
"num_base_bdevs": 3, 00:23:36.760 "num_base_bdevs_discovered": 3, 00:23:36.760 "num_base_bdevs_operational": 3, 00:23:36.760 "process": { 00:23:36.760 "type": "rebuild", 00:23:36.760 "target": "spare", 00:23:36.760 "progress": { 00:23:36.760 "blocks": 30720, 00:23:36.760 "percent": 23 00:23:36.760 } 00:23:36.760 }, 00:23:36.760 "base_bdevs_list": [ 00:23:36.760 { 00:23:36.760 "name": "spare", 00:23:36.760 "uuid": "14a23442-e53f-5573-a05b-481df067d10b", 00:23:36.760 "is_configured": true, 00:23:36.760 "data_offset": 0, 00:23:36.760 "data_size": 65536 00:23:36.760 }, 00:23:36.760 { 00:23:36.760 "name": "BaseBdev2", 00:23:36.760 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:36.760 "is_configured": true, 00:23:36.760 "data_offset": 0, 00:23:36.760 "data_size": 65536 00:23:36.760 }, 00:23:36.760 { 00:23:36.760 "name": "BaseBdev3", 00:23:36.760 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:36.760 "is_configured": true, 00:23:36.761 "data_offset": 0, 00:23:36.761 "data_size": 65536 00:23:36.761 } 00:23:36.761 ] 00:23:36.761 }' 00:23:36.761 07:24:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:37.019 07:24:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:37.019 07:24:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:37.019 07:24:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:37.019 07:24:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:37.955 07:24:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:37.955 07:24:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:37.955 07:24:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:37.955 07:24:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:37.955 07:24:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:37.955 07:24:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:37.955 07:24:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:37.955 07:24:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:38.214 07:24:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:38.214 "name": "raid_bdev1", 00:23:38.214 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:38.214 "strip_size_kb": 64, 00:23:38.214 "state": "online", 00:23:38.214 "raid_level": "raid5f", 00:23:38.214 "superblock": false, 00:23:38.214 "num_base_bdevs": 3, 00:23:38.214 "num_base_bdevs_discovered": 3, 00:23:38.214 "num_base_bdevs_operational": 3, 00:23:38.214 "process": { 00:23:38.214 "type": "rebuild", 00:23:38.214 "target": "spare", 00:23:38.214 "progress": { 00:23:38.214 "blocks": 59392, 00:23:38.214 "percent": 45 00:23:38.214 } 00:23:38.214 }, 00:23:38.214 "base_bdevs_list": [ 00:23:38.214 { 00:23:38.214 "name": "spare", 00:23:38.214 "uuid": "14a23442-e53f-5573-a05b-481df067d10b", 00:23:38.214 "is_configured": true, 00:23:38.214 "data_offset": 0, 00:23:38.214 "data_size": 65536 00:23:38.214 }, 00:23:38.214 { 00:23:38.214 "name": "BaseBdev2", 00:23:38.214 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:38.214 "is_configured": true, 00:23:38.214 "data_offset": 0, 00:23:38.214 "data_size": 65536 00:23:38.214 }, 00:23:38.214 { 00:23:38.214 "name": "BaseBdev3", 00:23:38.214 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:38.214 "is_configured": true, 00:23:38.214 "data_offset": 0, 00:23:38.214 "data_size": 65536 00:23:38.214 } 00:23:38.214 ] 00:23:38.214 }' 00:23:38.214 
07:24:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:38.214 07:24:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:38.214 07:24:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:38.472 07:24:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:38.472 07:24:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:39.407 07:24:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:39.407 07:24:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:39.407 07:24:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:39.407 07:24:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:39.407 07:24:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:39.407 07:24:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:39.407 07:24:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:39.407 07:24:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:39.666 07:24:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:39.666 "name": "raid_bdev1", 00:23:39.666 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:39.666 "strip_size_kb": 64, 00:23:39.666 "state": "online", 00:23:39.666 "raid_level": "raid5f", 00:23:39.666 "superblock": false, 00:23:39.666 "num_base_bdevs": 3, 00:23:39.666 "num_base_bdevs_discovered": 3, 00:23:39.666 "num_base_bdevs_operational": 3, 00:23:39.666 "process": { 00:23:39.666 "type": "rebuild", 00:23:39.666 "target": "spare", 00:23:39.666 "progress": { 00:23:39.666 "blocks": 88064, 00:23:39.666 "percent": 67 00:23:39.666 } 00:23:39.666 }, 00:23:39.666 "base_bdevs_list": [ 00:23:39.666 { 00:23:39.666 "name": "spare", 00:23:39.666 "uuid": "14a23442-e53f-5573-a05b-481df067d10b", 00:23:39.666 "is_configured": true, 00:23:39.666 "data_offset": 0, 00:23:39.666 "data_size": 65536 00:23:39.666 }, 00:23:39.666 { 00:23:39.666 "name": "BaseBdev2", 00:23:39.666 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:39.666 "is_configured": true, 00:23:39.666 "data_offset": 0, 00:23:39.666 "data_size": 65536 00:23:39.666 }, 00:23:39.666 { 00:23:39.666 "name": "BaseBdev3", 00:23:39.666 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:39.666 "is_configured": true, 00:23:39.666 "data_offset": 0, 00:23:39.666 "data_size": 65536 00:23:39.666 } 00:23:39.666 ] 00:23:39.666 }' 00:23:39.666 07:24:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:39.666 07:24:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:39.666 07:24:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:39.666 07:24:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:39.666 07:24:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:40.601 07:24:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:40.601 07:24:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:40.601 07:24:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:40.601 07:24:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:40.601 07:24:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:40.601 07:24:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:40.601 07:24:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:40.601 07:24:14 -- bdev/bdev_raid.sh@188 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:23:40.860 07:24:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:40.860 "name": "raid_bdev1", 00:23:40.860 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:40.860 "strip_size_kb": 64, 00:23:40.860 "state": "online", 00:23:40.860 "raid_level": "raid5f", 00:23:40.860 "superblock": false, 00:23:40.860 "num_base_bdevs": 3, 00:23:40.860 "num_base_bdevs_discovered": 3, 00:23:40.860 "num_base_bdevs_operational": 3, 00:23:40.860 "process": { 00:23:40.860 "type": "rebuild", 00:23:40.860 "target": "spare", 00:23:40.860 "progress": { 00:23:40.860 "blocks": 114688, 00:23:40.860 "percent": 87 00:23:40.860 } 00:23:40.860 }, 00:23:40.860 "base_bdevs_list": [ 00:23:40.860 { 00:23:40.860 "name": "spare", 00:23:40.860 "uuid": "14a23442-e53f-5573-a05b-481df067d10b", 00:23:40.860 "is_configured": true, 00:23:40.860 "data_offset": 0, 00:23:40.860 "data_size": 65536 00:23:40.860 }, 00:23:40.860 { 00:23:40.860 "name": "BaseBdev2", 00:23:40.860 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:40.860 "is_configured": true, 00:23:40.860 "data_offset": 0, 00:23:40.860 "data_size": 65536 00:23:40.860 }, 00:23:40.860 { 00:23:40.860 "name": "BaseBdev3", 00:23:40.860 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:40.860 "is_configured": true, 00:23:40.860 "data_offset": 0, 00:23:40.860 "data_size": 65536 00:23:40.860 } 00:23:40.860 ] 00:23:40.860 }' 00:23:40.860 07:24:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:40.860 07:24:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:40.860 07:24:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:41.118 07:24:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:41.119 07:24:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:41.686 [2024-02-13 07:24:15.254308] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:23:41.686 [2024-02-13 07:24:15.254596] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:23:41.686 [2024-02-13 07:24:15.254886] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:41.944 07:24:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:41.944 07:24:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:41.944 07:24:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:41.944 07:24:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:41.944 07:24:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:41.944 07:24:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:41.944 07:24:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:41.944 07:24:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.203 07:24:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:42.203 "name": "raid_bdev1", 00:23:42.203 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:42.203 "strip_size_kb": 64, 00:23:42.203 "state": "online", 00:23:42.203 "raid_level": "raid5f", 00:23:42.203 "superblock": false, 00:23:42.203 "num_base_bdevs": 3, 00:23:42.203 "num_base_bdevs_discovered": 3, 00:23:42.203 "num_base_bdevs_operational": 3, 00:23:42.203 "base_bdevs_list": [ 00:23:42.203 { 00:23:42.203 "name": "spare", 00:23:42.203 "uuid": "14a23442-e53f-5573-a05b-481df067d10b", 00:23:42.203 "is_configured": true, 00:23:42.203 "data_offset": 0, 
00:23:42.203 "data_size": 65536 00:23:42.203 }, 00:23:42.203 { 00:23:42.203 "name": "BaseBdev2", 00:23:42.203 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:42.203 "is_configured": true, 00:23:42.203 "data_offset": 0, 00:23:42.203 "data_size": 65536 00:23:42.203 }, 00:23:42.203 { 00:23:42.203 "name": "BaseBdev3", 00:23:42.203 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:42.203 "is_configured": true, 00:23:42.203 "data_offset": 0, 00:23:42.203 "data_size": 65536 00:23:42.203 } 00:23:42.203 ] 00:23:42.203 }' 00:23:42.203 07:24:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:42.203 07:24:15 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:23:42.203 07:24:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:42.461 07:24:15 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:23:42.461 07:24:15 -- bdev/bdev_raid.sh@660 -- # break 00:23:42.461 07:24:15 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:42.461 07:24:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:42.461 07:24:15 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:42.461 07:24:15 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:42.461 07:24:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:42.461 07:24:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.462 07:24:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.462 07:24:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:42.462 "name": "raid_bdev1", 00:23:42.462 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:42.462 "strip_size_kb": 64, 00:23:42.462 "state": "online", 00:23:42.462 "raid_level": "raid5f", 00:23:42.462 "superblock": false, 00:23:42.462 "num_base_bdevs": 3, 00:23:42.462 "num_base_bdevs_discovered": 3, 00:23:42.462 "num_base_bdevs_operational": 3, 00:23:42.462 "base_bdevs_list": [ 00:23:42.462 { 00:23:42.462 "name": "spare", 00:23:42.462 "uuid": "14a23442-e53f-5573-a05b-481df067d10b", 00:23:42.462 "is_configured": true, 00:23:42.462 "data_offset": 0, 00:23:42.462 "data_size": 65536 00:23:42.462 }, 00:23:42.462 { 00:23:42.462 "name": "BaseBdev2", 00:23:42.462 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:42.462 "is_configured": true, 00:23:42.462 "data_offset": 0, 00:23:42.462 "data_size": 65536 00:23:42.462 }, 00:23:42.462 { 00:23:42.462 "name": "BaseBdev3", 00:23:42.462 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:42.462 "is_configured": true, 00:23:42.462 "data_offset": 0, 00:23:42.462 "data_size": 65536 00:23:42.462 } 00:23:42.462 ] 00:23:42.462 }' 00:23:42.462 07:24:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 
00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:42.720 07:24:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:42.979 07:24:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:42.979 "name": "raid_bdev1", 00:23:42.979 "uuid": "64a1d89c-d12d-4123-bade-6f4076e7e404", 00:23:42.979 "strip_size_kb": 64, 00:23:42.979 "state": "online", 00:23:42.979 "raid_level": "raid5f", 00:23:42.979 "superblock": false, 00:23:42.979 "num_base_bdevs": 3, 00:23:42.979 "num_base_bdevs_discovered": 3, 00:23:42.980 "num_base_bdevs_operational": 3, 00:23:42.980 "base_bdevs_list": [ 00:23:42.980 { 00:23:42.980 "name": "spare", 00:23:42.980 "uuid": "14a23442-e53f-5573-a05b-481df067d10b", 00:23:42.980 "is_configured": true, 00:23:42.980 "data_offset": 0, 00:23:42.980 "data_size": 65536 00:23:42.980 }, 00:23:42.980 { 00:23:42.980 "name": "BaseBdev2", 00:23:42.980 "uuid": "7cfaed8d-ac7e-4a38-8d6d-df232cda8e87", 00:23:42.980 "is_configured": true, 00:23:42.980 "data_offset": 0, 00:23:42.980 "data_size": 65536 00:23:42.980 }, 00:23:42.980 { 00:23:42.980 "name": "BaseBdev3", 00:23:42.980 "uuid": "2034eb7c-5cef-45db-9e6e-79f53e1647e2", 00:23:42.980 "is_configured": true, 00:23:42.980 "data_offset": 0, 00:23:42.980 "data_size": 65536 00:23:42.980 } 00:23:42.980 ] 00:23:42.980 }' 00:23:42.980 07:24:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:42.980 07:24:16 -- common/autotest_common.sh@10 -- # set +x 00:23:43.547 07:24:17 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:23:43.805 [2024-02-13 07:24:17.263123] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:23:43.806 [2024-02-13 07:24:17.263262] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:23:43.806 [2024-02-13 07:24:17.263440] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:23:43.806 [2024-02-13 07:24:17.263611] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:23:43.806 [2024-02-13 07:24:17.263707] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000008a80 name raid_bdev1, state offline 00:23:43.806 07:24:17 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:43.806 07:24:17 -- bdev/bdev_raid.sh@671 -- # jq length 00:23:43.806 07:24:17 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:23:43.806 07:24:17 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:23:43.806 07:24:17 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:23:43.806 07:24:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:43.806 07:24:17 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:43.806 07:24:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:43.806 07:24:17 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:43.806 07:24:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:43.806 07:24:17 -- bdev/nbd_common.sh@12 -- # local i 
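The teardown just traced follows a fixed pattern: delete raid_bdev1, confirm bdev_raid_get_bdevs now returns an empty array, then export the surviving base bdev and the rebuilt spare as NBD block devices so their contents can be compared byte for byte (the cmp a few records below, at offset 0 because this run has no superblock and data_offset is 0). As a paraphrase of the traced commands, not the literal script:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_delete raid_bdev1                       # state: online -> offline, then freed
  [[ $($rpc bdev_raid_get_bdevs all | jq length) == 0 ]] # no raid bdevs may remain
  $rpc nbd_start_disk BaseBdev1 /dev/nbd0                # export both bdevs via NBD
  $rpc nbd_start_disk spare /dev/nbd1
  cmp -i 0 /dev/nbd0 /dev/nbd1                           # rebuilt spare must match BaseBdev1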
00:23:43.806 07:24:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:43.806 07:24:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:43.806 07:24:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:23:44.064 /dev/nbd0 00:23:44.064 07:24:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:44.064 07:24:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:44.064 07:24:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:44.064 07:24:17 -- common/autotest_common.sh@855 -- # local i 00:23:44.064 07:24:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:44.064 07:24:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:44.064 07:24:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:44.064 07:24:17 -- common/autotest_common.sh@859 -- # break 00:23:44.064 07:24:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:44.064 07:24:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:44.065 07:24:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:44.065 1+0 records in 00:23:44.065 1+0 records out 00:23:44.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293425 s, 14.0 MB/s 00:23:44.065 07:24:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.065 07:24:17 -- common/autotest_common.sh@872 -- # size=4096 00:23:44.065 07:24:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.065 07:24:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:44.065 07:24:17 -- common/autotest_common.sh@875 -- # return 0 00:23:44.065 07:24:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:44.065 07:24:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:44.065 07:24:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:23:44.324 /dev/nbd1 00:23:44.324 07:24:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:44.324 07:24:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:44.324 07:24:17 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:23:44.324 07:24:17 -- common/autotest_common.sh@855 -- # local i 00:23:44.324 07:24:17 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:44.324 07:24:17 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:44.324 07:24:17 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:23:44.324 07:24:17 -- common/autotest_common.sh@859 -- # break 00:23:44.324 07:24:17 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:44.324 07:24:17 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:44.324 07:24:17 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:44.324 1+0 records in 00:23:44.324 1+0 records out 00:23:44.324 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000495983 s, 8.3 MB/s 00:23:44.324 07:24:17 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.324 07:24:17 -- common/autotest_common.sh@872 -- # size=4096 00:23:44.324 07:24:17 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:44.324 07:24:17 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:44.324 07:24:17 -- common/autotest_common.sh@875 -- # 
return 0 00:23:44.324 07:24:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:44.324 07:24:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:23:44.324 07:24:17 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:23:44.582 07:24:18 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:23:44.582 07:24:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:44.582 07:24:18 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:44.582 07:24:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:44.582 07:24:18 -- bdev/nbd_common.sh@51 -- # local i 00:23:44.582 07:24:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:44.582 07:24:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@41 -- # break 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@45 -- # return 0 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:44.840 07:24:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:23:45.099 07:24:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:45.099 07:24:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:45.099 07:24:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:45.099 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:45.099 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:45.099 07:24:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:45.099 07:24:18 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:45.358 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:45.358 07:24:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:45.358 07:24:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:45.358 07:24:18 -- bdev/nbd_common.sh@41 -- # break 00:23:45.358 07:24:18 -- bdev/nbd_common.sh@45 -- # return 0 00:23:45.358 07:24:18 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:23:45.358 07:24:18 -- bdev/bdev_raid.sh@709 -- # killprocess 133653 00:23:45.358 07:24:18 -- common/autotest_common.sh@924 -- # '[' -z 133653 ']' 00:23:45.358 07:24:18 -- common/autotest_common.sh@928 -- # kill -0 133653 00:23:45.358 07:24:18 -- common/autotest_common.sh@929 -- # uname 00:23:45.358 07:24:18 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:23:45.358 07:24:18 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 133653 00:23:45.358 07:24:18 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:23:45.358 07:24:18 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:23:45.358 07:24:18 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 133653' 00:23:45.358 killing process with 
pid 133653 00:23:45.358 07:24:18 -- common/autotest_common.sh@943 -- # kill 133653 00:23:45.358 07:24:18 -- common/autotest_common.sh@948 -- # wait 133653 00:23:45.358 Received shutdown signal, test time was about 60.000000 seconds 00:23:45.358 00:23:45.358 Latency(us) 00:23:45.358 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.358 =================================================================================================================== 00:23:45.358 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:45.358 [2024-02-13 07:24:18.851816] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:23:45.617 [2024-02-13 07:24:19.128251] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:23:46.554 ************************************ 00:23:46.554 END TEST raid5f_rebuild_test 00:23:46.554 ************************************ 00:23:46.554 07:24:20 -- bdev/bdev_raid.sh@711 -- # return 0 00:23:46.554 00:23:46.554 real 0m20.740s 00:23:46.554 user 0m31.192s 00:23:46.554 sys 0m2.436s 00:23:46.554 07:24:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:46.554 07:24:20 -- common/autotest_common.sh@10 -- # set +x 00:23:46.554 07:24:20 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false 00:23:46.554 07:24:20 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:23:46.554 07:24:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:23:46.555 07:24:20 -- common/autotest_common.sh@10 -- # set +x 00:23:46.555 ************************************ 00:23:46.555 START TEST raid5f_rebuild_test_sb 00:23:46.555 ************************************ 00:23:46.555 07:24:20 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid5f 3 true false 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=3 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@529 
-- # '[' false = true ']' 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@544 -- # raid_pid=134240 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:23:46.555 07:24:20 -- bdev/bdev_raid.sh@545 -- # waitforlisten 134240 /var/tmp/spdk-raid.sock 00:23:46.555 07:24:20 -- common/autotest_common.sh@817 -- # '[' -z 134240 ']' 00:23:46.555 07:24:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:23:46.555 07:24:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:46.555 07:24:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:23:46.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:23:46.555 07:24:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:46.555 07:24:20 -- common/autotest_common.sh@10 -- # set +x 00:23:46.555 [2024-02-13 07:24:20.221952] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:23:46.555 [2024-02-13 07:24:20.222285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134240 ] 00:23:46.555 I/O size of 3145728 is greater than zero copy threshold (65536). 00:23:46.555 Zero copy mechanism will not be used. 
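Everything from raid_pid=134240 up to the zero-copy notice is the fixture starting its own bdevperf instance on a private RPC socket and blocking until that socket answers; the notice itself is expected here, since the -o 3M I/O size exceeds the 65536-byte zero-copy threshold reported in the log. With paths and flags copied from the trace, the launch amounts to:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # waitforlisten polls until rpc.py can connect to the UNIX socket
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock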
00:23:46.814 [2024-02-13 07:24:20.375098] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.073 [2024-02-13 07:24:20.553001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.073 [2024-02-13 07:24:20.737585] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:23:47.640 07:24:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:47.640 07:24:21 -- common/autotest_common.sh@850 -- # return 0 00:23:47.640 07:24:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:47.640 07:24:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:47.640 07:24:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:23:47.640 BaseBdev1_malloc 00:23:47.640 07:24:21 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:23:47.899 [2024-02-13 07:24:21.530097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:23:47.899 [2024-02-13 07:24:21.530384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:47.899 [2024-02-13 07:24:21.530532] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:23:47.899 [2024-02-13 07:24:21.530710] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:47.899 [2024-02-13 07:24:21.533158] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:47.899 [2024-02-13 07:24:21.533343] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:23:47.899 BaseBdev1 00:23:47.899 07:24:21 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:47.899 07:24:21 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:47.899 07:24:21 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:23:48.159 BaseBdev2_malloc 00:23:48.159 07:24:21 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:23:48.446 [2024-02-13 07:24:22.064526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:23:48.446 [2024-02-13 07:24:22.064702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.446 [2024-02-13 07:24:22.064771] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:23:48.446 [2024-02-13 07:24:22.064902] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.446 [2024-02-13 07:24:22.067081] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.446 [2024-02-13 07:24:22.067254] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:23:48.446 BaseBdev2 00:23:48.446 07:24:22 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:23:48.446 07:24:22 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:23:48.446 07:24:22 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:23:48.704 BaseBdev3_malloc 00:23:48.704 07:24:22 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:23:48.962 [2024-02-13 07:24:22.472279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:23:48.962 [2024-02-13 07:24:22.472467] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:48.962 [2024-02-13 07:24:22.472533] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:23:48.962 [2024-02-13 07:24:22.472663] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:48.962 [2024-02-13 07:24:22.474903] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:48.962 [2024-02-13 07:24:22.475076] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:23:48.962 BaseBdev3 00:23:48.962 07:24:22 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:23:49.221 spare_malloc 00:23:49.221 07:24:22 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:23:49.221 spare_delay 00:23:49.221 07:24:22 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:23:49.479 [2024-02-13 07:24:23.078271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:23:49.479 [2024-02-13 07:24:23.078466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:49.479 [2024-02-13 07:24:23.078526] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:23:49.479 [2024-02-13 07:24:23.078656] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:49.479 [2024-02-13 07:24:23.080803] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:49.479 [2024-02-13 07:24:23.080976] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:23:49.479 spare 00:23:49.479 07:24:23 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 00:23:49.738 [2024-02-13 07:24:23.322454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:23:49.738 [2024-02-13 07:24:23.324286] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:23:49.738 [2024-02-13 07:24:23.324503] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:23:49.738 [2024-02-13 07:24:23.324775] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009c80 00:23:49.738 [2024-02-13 07:24:23.324840] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:23:49.738 [2024-02-13 07:24:23.325052] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:23:49.738 [2024-02-13 07:24:23.329701] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009c80 00:23:49.738 [2024-02-13 07:24:23.329843] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009c80 00:23:49.738 [2024-02-13 07:24:23.330128] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@564 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:49.738 07:24:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:49.997 07:24:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:49.997 "name": "raid_bdev1", 00:23:49.997 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:23:49.997 "strip_size_kb": 64, 00:23:49.997 "state": "online", 00:23:49.997 "raid_level": "raid5f", 00:23:49.997 "superblock": true, 00:23:49.997 "num_base_bdevs": 3, 00:23:49.997 "num_base_bdevs_discovered": 3, 00:23:49.997 "num_base_bdevs_operational": 3, 00:23:49.997 "base_bdevs_list": [ 00:23:49.997 { 00:23:49.997 "name": "BaseBdev1", 00:23:49.997 "uuid": "6f3a54ca-b44f-5f91-b6be-debd1359937d", 00:23:49.997 "is_configured": true, 00:23:49.997 "data_offset": 2048, 00:23:49.997 "data_size": 63488 00:23:49.997 }, 00:23:49.997 { 00:23:49.997 "name": "BaseBdev2", 00:23:49.997 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:23:49.997 "is_configured": true, 00:23:49.997 "data_offset": 2048, 00:23:49.997 "data_size": 63488 00:23:49.997 }, 00:23:49.997 { 00:23:49.997 "name": "BaseBdev3", 00:23:49.997 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:23:49.997 "is_configured": true, 00:23:49.997 "data_offset": 2048, 00:23:49.997 "data_size": 63488 00:23:49.997 } 00:23:49.997 ] 00:23:49.997 }' 00:23:49.997 07:24:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:49.997 07:24:23 -- common/autotest_common.sh@10 -- # set +x 00:23:50.565 07:24:24 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:23:50.565 07:24:24 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:23:50.823 [2024-02-13 07:24:24.407845] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:23:50.823 07:24:24 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=126976 00:23:50.823 07:24:24 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:23:50.824 07:24:24 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:51.082 07:24:24 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:23:51.082 07:24:24 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:23:51.082 07:24:24 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:23:51.082 07:24:24 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:23:51.082 07:24:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:51.082 07:24:24 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:23:51.082 07:24:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:51.082 07:24:24 -- 
bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:23:51.082 07:24:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:51.082 07:24:24 -- bdev/nbd_common.sh@12 -- # local i 00:23:51.082 07:24:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:51.082 07:24:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:51.082 07:24:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:23:51.341 [2024-02-13 07:24:24.783787] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:23:51.341 /dev/nbd0 00:23:51.341 07:24:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:51.341 07:24:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:51.341 07:24:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:23:51.341 07:24:24 -- common/autotest_common.sh@855 -- # local i 00:23:51.341 07:24:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:23:51.341 07:24:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:23:51.341 07:24:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:23:51.341 07:24:24 -- common/autotest_common.sh@859 -- # break 00:23:51.341 07:24:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:23:51.341 07:24:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:23:51.341 07:24:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:51.341 1+0 records in 00:23:51.341 1+0 records out 00:23:51.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327092 s, 12.5 MB/s 00:23:51.341 07:24:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.341 07:24:24 -- common/autotest_common.sh@872 -- # size=4096 00:23:51.341 07:24:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:51.341 07:24:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:23:51.341 07:24:24 -- common/autotest_common.sh@875 -- # return 0 00:23:51.341 07:24:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:51.341 07:24:24 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:23:51.341 07:24:24 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:23:51.341 07:24:24 -- bdev/bdev_raid.sh@581 -- # write_unit_size=256 00:23:51.341 07:24:24 -- bdev/bdev_raid.sh@582 -- # echo 128 00:23:51.341 07:24:24 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:23:51.600 496+0 records in 00:23:51.600 496+0 records out 00:23:51.600 65011712 bytes (65 MB, 62 MiB) copied, 0.369366 s, 176 MB/s 00:23:51.600 07:24:25 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:23:51.600 07:24:25 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:23:51.600 07:24:25 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:23:51.600 07:24:25 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:51.600 07:24:25 -- bdev/nbd_common.sh@51 -- # local i 00:23:51.600 07:24:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:51.600 07:24:25 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:23:51.859 07:24:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:51.859 07:24:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:51.859 07:24:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:51.859 07:24:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 
)) 00:23:51.859 07:24:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:51.859 07:24:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:51.859 07:24:25 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:23:51.859 [2024-02-13 07:24:25.491404] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:52.118 07:24:25 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:23:52.118 07:24:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:52.118 07:24:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:52.118 07:24:25 -- bdev/nbd_common.sh@41 -- # break 00:23:52.118 07:24:25 -- bdev/nbd_common.sh@45 -- # return 0 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:23:52.118 [2024-02-13 07:24:25.769028] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:52.118 07:24:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:52.377 07:24:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:52.377 "name": "raid_bdev1", 00:23:52.377 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:23:52.377 "strip_size_kb": 64, 00:23:52.377 "state": "online", 00:23:52.377 "raid_level": "raid5f", 00:23:52.377 "superblock": true, 00:23:52.377 "num_base_bdevs": 3, 00:23:52.377 "num_base_bdevs_discovered": 2, 00:23:52.377 "num_base_bdevs_operational": 2, 00:23:52.377 "base_bdevs_list": [ 00:23:52.377 { 00:23:52.377 "name": null, 00:23:52.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:52.377 "is_configured": false, 00:23:52.377 "data_offset": 2048, 00:23:52.377 "data_size": 63488 00:23:52.377 }, 00:23:52.377 { 00:23:52.377 "name": "BaseBdev2", 00:23:52.377 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:23:52.377 "is_configured": true, 00:23:52.377 "data_offset": 2048, 00:23:52.377 "data_size": 63488 00:23:52.377 }, 00:23:52.377 { 00:23:52.377 "name": "BaseBdev3", 00:23:52.377 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:23:52.377 "is_configured": true, 00:23:52.377 "data_offset": 2048, 00:23:52.377 "data_size": 63488 00:23:52.377 } 00:23:52.377 ] 00:23:52.377 }' 00:23:52.377 07:24:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:52.377 07:24:26 -- common/autotest_common.sh@10 -- # set +x 00:23:52.945 07:24:26 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:53.204 [2024-02-13 07:24:26.809363] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 
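Having filled the array through /dev/nbd0 (the 65 MB of urandom written above), the test now exercises a hot remove and re-add: pulling BaseBdev1 leaves the raid5f bdev online with 2 of 3 base bdevs and a null slot in base_bdevs_list (the all-zero uuid entry in the state dump above), and attaching the spare immediately starts the rebuild that the following records poll. The traced RPC sequence reduces to (names from the log; the intermediate state check is the verify_raid_bdev_state call at @594):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_raid_remove_base_bdev BaseBdev1      # degrade: array stays online, 2/3 operational
  $rpc bdev_raid_add_base_bdev raid_bdev1 spare  # attach spare -> rebuild kicks off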
00:23:53.204 [2024-02-13 07:24:26.809426] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:53.204 [2024-02-13 07:24:26.820357] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002acc0 00:23:53.204 [2024-02-13 07:24:26.826059] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:53.204 07:24:26 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:23:54.140 07:24:27 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:54.140 07:24:27 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:54.140 07:24:27 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:54.140 07:24:27 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:54.140 07:24:27 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:54.140 07:24:27 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.140 07:24:27 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:54.399 07:24:28 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:54.399 "name": "raid_bdev1", 00:23:54.399 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:23:54.399 "strip_size_kb": 64, 00:23:54.399 "state": "online", 00:23:54.399 "raid_level": "raid5f", 00:23:54.399 "superblock": true, 00:23:54.399 "num_base_bdevs": 3, 00:23:54.399 "num_base_bdevs_discovered": 3, 00:23:54.399 "num_base_bdevs_operational": 3, 00:23:54.399 "process": { 00:23:54.399 "type": "rebuild", 00:23:54.399 "target": "spare", 00:23:54.399 "progress": { 00:23:54.399 "blocks": 24576, 00:23:54.399 "percent": 19 00:23:54.399 } 00:23:54.399 }, 00:23:54.399 "base_bdevs_list": [ 00:23:54.399 { 00:23:54.399 "name": "spare", 00:23:54.399 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:23:54.399 "is_configured": true, 00:23:54.400 "data_offset": 2048, 00:23:54.400 "data_size": 63488 00:23:54.400 }, 00:23:54.400 { 00:23:54.400 "name": "BaseBdev2", 00:23:54.400 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:23:54.400 "is_configured": true, 00:23:54.400 "data_offset": 2048, 00:23:54.400 "data_size": 63488 00:23:54.400 }, 00:23:54.400 { 00:23:54.400 "name": "BaseBdev3", 00:23:54.400 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:23:54.400 "is_configured": true, 00:23:54.400 "data_offset": 2048, 00:23:54.400 "data_size": 63488 00:23:54.400 } 00:23:54.400 ] 00:23:54.400 }' 00:23:54.400 07:24:28 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:54.658 07:24:28 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:54.658 07:24:28 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:54.658 07:24:28 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:54.658 07:24:28 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:23:54.917 [2024-02-13 07:24:28.367414] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:23:54.917 [2024-02-13 07:24:28.440492] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:23:54.917 [2024-02-13 07:24:28.440595] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:23:54.917 07:24:28 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:54.917 07:24:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:55.176 07:24:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:23:55.176 "name": "raid_bdev1", 00:23:55.176 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:23:55.176 "strip_size_kb": 64, 00:23:55.176 "state": "online", 00:23:55.176 "raid_level": "raid5f", 00:23:55.176 "superblock": true, 00:23:55.176 "num_base_bdevs": 3, 00:23:55.176 "num_base_bdevs_discovered": 2, 00:23:55.176 "num_base_bdevs_operational": 2, 00:23:55.176 "base_bdevs_list": [ 00:23:55.176 { 00:23:55.176 "name": null, 00:23:55.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:55.176 "is_configured": false, 00:23:55.176 "data_offset": 2048, 00:23:55.176 "data_size": 63488 00:23:55.176 }, 00:23:55.176 { 00:23:55.176 "name": "BaseBdev2", 00:23:55.176 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:23:55.176 "is_configured": true, 00:23:55.176 "data_offset": 2048, 00:23:55.176 "data_size": 63488 00:23:55.176 }, 00:23:55.176 { 00:23:55.176 "name": "BaseBdev3", 00:23:55.176 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:23:55.176 "is_configured": true, 00:23:55.176 "data_offset": 2048, 00:23:55.176 "data_size": 63488 00:23:55.176 } 00:23:55.176 ] 00:23:55.176 }' 00:23:55.176 07:24:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:23:55.176 07:24:28 -- common/autotest_common.sh@10 -- # set +x 00:23:55.744 07:24:29 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:23:55.744 07:24:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:55.744 07:24:29 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:23:55.744 07:24:29 -- bdev/bdev_raid.sh@185 -- # local target=none 00:23:55.744 07:24:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:55.744 07:24:29 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:55.744 07:24:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:56.003 07:24:29 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:56.003 "name": "raid_bdev1", 00:23:56.003 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:23:56.003 "strip_size_kb": 64, 00:23:56.003 "state": "online", 00:23:56.003 "raid_level": "raid5f", 00:23:56.003 "superblock": true, 00:23:56.003 "num_base_bdevs": 3, 00:23:56.003 "num_base_bdevs_discovered": 2, 00:23:56.003 "num_base_bdevs_operational": 2, 00:23:56.003 "base_bdevs_list": [ 00:23:56.003 { 00:23:56.003 "name": null, 00:23:56.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:23:56.003 "is_configured": false, 00:23:56.003 "data_offset": 2048, 00:23:56.003 "data_size": 63488 00:23:56.003 }, 00:23:56.003 { 00:23:56.003 "name": "BaseBdev2", 00:23:56.003 "uuid": 
"8589ca56-6592-5eab-ada8-f8329c866208", 00:23:56.003 "is_configured": true, 00:23:56.003 "data_offset": 2048, 00:23:56.003 "data_size": 63488 00:23:56.003 }, 00:23:56.003 { 00:23:56.003 "name": "BaseBdev3", 00:23:56.003 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:23:56.003 "is_configured": true, 00:23:56.003 "data_offset": 2048, 00:23:56.003 "data_size": 63488 00:23:56.003 } 00:23:56.003 ] 00:23:56.003 }' 00:23:56.003 07:24:29 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:56.003 07:24:29 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:23:56.003 07:24:29 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:56.003 07:24:29 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:23:56.003 07:24:29 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:23:56.263 [2024-02-13 07:24:29.846625] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:23:56.263 [2024-02-13 07:24:29.846704] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:23:56.263 [2024-02-13 07:24:29.859205] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:23:56.263 [2024-02-13 07:24:29.865870] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:23:56.263 07:24:29 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:23:57.200 07:24:30 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.200 07:24:30 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.200 07:24:30 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.200 07:24:30 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.200 07:24:30 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.200 07:24:30 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.200 07:24:30 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.459 07:24:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.459 "name": "raid_bdev1", 00:23:57.459 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:23:57.459 "strip_size_kb": 64, 00:23:57.459 "state": "online", 00:23:57.459 "raid_level": "raid5f", 00:23:57.459 "superblock": true, 00:23:57.459 "num_base_bdevs": 3, 00:23:57.459 "num_base_bdevs_discovered": 3, 00:23:57.459 "num_base_bdevs_operational": 3, 00:23:57.459 "process": { 00:23:57.459 "type": "rebuild", 00:23:57.459 "target": "spare", 00:23:57.459 "progress": { 00:23:57.459 "blocks": 24576, 00:23:57.459 "percent": 19 00:23:57.459 } 00:23:57.459 }, 00:23:57.459 "base_bdevs_list": [ 00:23:57.459 { 00:23:57.459 "name": "spare", 00:23:57.459 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:23:57.459 "is_configured": true, 00:23:57.459 "data_offset": 2048, 00:23:57.459 "data_size": 63488 00:23:57.459 }, 00:23:57.459 { 00:23:57.459 "name": "BaseBdev2", 00:23:57.459 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:23:57.459 "is_configured": true, 00:23:57.459 "data_offset": 2048, 00:23:57.459 "data_size": 63488 00:23:57.459 }, 00:23:57.459 { 00:23:57.459 "name": "BaseBdev3", 00:23:57.459 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:23:57.459 "is_configured": true, 00:23:57.459 "data_offset": 2048, 00:23:57.459 "data_size": 63488 00:23:57.459 } 00:23:57.459 ] 00:23:57.459 }' 00:23:57.459 07:24:31 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:23:57.459 07:24:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.459 07:24:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:23:57.718 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=3 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@657 -- # local timeout=652 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:57.718 07:24:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:57.978 07:24:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:57.978 "name": "raid_bdev1", 00:23:57.978 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:23:57.978 "strip_size_kb": 64, 00:23:57.978 "state": "online", 00:23:57.978 "raid_level": "raid5f", 00:23:57.978 "superblock": true, 00:23:57.978 "num_base_bdevs": 3, 00:23:57.978 "num_base_bdevs_discovered": 3, 00:23:57.978 "num_base_bdevs_operational": 3, 00:23:57.978 "process": { 00:23:57.978 "type": "rebuild", 00:23:57.978 "target": "spare", 00:23:57.978 "progress": { 00:23:57.978 "blocks": 30720, 00:23:57.978 "percent": 24 00:23:57.978 } 00:23:57.978 }, 00:23:57.978 "base_bdevs_list": [ 00:23:57.978 { 00:23:57.978 "name": "spare", 00:23:57.978 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:23:57.978 "is_configured": true, 00:23:57.978 "data_offset": 2048, 00:23:57.978 "data_size": 63488 00:23:57.978 }, 00:23:57.978 { 00:23:57.978 "name": "BaseBdev2", 00:23:57.978 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:23:57.978 "is_configured": true, 00:23:57.978 "data_offset": 2048, 00:23:57.978 "data_size": 63488 00:23:57.978 }, 00:23:57.978 { 00:23:57.978 "name": "BaseBdev3", 00:23:57.978 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:23:57.978 "is_configured": true, 00:23:57.978 "data_offset": 2048, 00:23:57.978 "data_size": 63488 00:23:57.978 } 00:23:57.978 ] 00:23:57.978 }' 00:23:57.978 07:24:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:57.978 07:24:31 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:57.978 07:24:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:57.978 07:24:31 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:57.978 07:24:31 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:23:58.914 07:24:32 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:23:58.914 07:24:32 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:23:58.914 07:24:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:23:58.914 07:24:32 -- 
bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:23:58.914 07:24:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:23:58.914 07:24:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:23:58.914 07:24:32 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:23:58.914 07:24:32 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:23:59.172 07:24:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:23:59.172 "name": "raid_bdev1", 00:23:59.172 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:23:59.172 "strip_size_kb": 64, 00:23:59.172 "state": "online", 00:23:59.172 "raid_level": "raid5f", 00:23:59.172 "superblock": true, 00:23:59.172 "num_base_bdevs": 3, 00:23:59.172 "num_base_bdevs_discovered": 3, 00:23:59.172 "num_base_bdevs_operational": 3, 00:23:59.172 "process": { 00:23:59.172 "type": "rebuild", 00:23:59.172 "target": "spare", 00:23:59.172 "progress": { 00:23:59.172 "blocks": 57344, 00:23:59.172 "percent": 45 00:23:59.172 } 00:23:59.172 }, 00:23:59.172 "base_bdevs_list": [ 00:23:59.172 { 00:23:59.172 "name": "spare", 00:23:59.172 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:23:59.172 "is_configured": true, 00:23:59.172 "data_offset": 2048, 00:23:59.172 "data_size": 63488 00:23:59.172 }, 00:23:59.172 { 00:23:59.172 "name": "BaseBdev2", 00:23:59.172 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:23:59.172 "is_configured": true, 00:23:59.172 "data_offset": 2048, 00:23:59.172 "data_size": 63488 00:23:59.172 }, 00:23:59.172 { 00:23:59.172 "name": "BaseBdev3", 00:23:59.172 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:23:59.172 "is_configured": true, 00:23:59.172 "data_offset": 2048, 00:23:59.172 "data_size": 63488 00:23:59.172 } 00:23:59.172 ] 00:23:59.172 }' 00:23:59.172 07:24:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:23:59.172 07:24:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:23:59.172 07:24:32 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:23:59.431 07:24:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:23:59.431 07:24:32 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:00.377 07:24:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:00.377 07:24:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:00.377 07:24:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:00.377 07:24:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:00.377 07:24:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:00.377 07:24:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:00.377 07:24:33 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:00.377 07:24:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:00.648 07:24:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:00.648 "name": "raid_bdev1", 00:24:00.648 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:24:00.648 "strip_size_kb": 64, 00:24:00.648 "state": "online", 00:24:00.648 "raid_level": "raid5f", 00:24:00.648 "superblock": true, 00:24:00.648 "num_base_bdevs": 3, 00:24:00.648 "num_base_bdevs_discovered": 3, 00:24:00.648 "num_base_bdevs_operational": 3, 00:24:00.648 "process": { 00:24:00.648 "type": "rebuild", 00:24:00.648 "target": "spare", 00:24:00.648 "progress": { 00:24:00.648 "blocks": 86016, 00:24:00.648 "percent": 67 00:24:00.648 } 
00:24:00.648 }, 00:24:00.648 "base_bdevs_list": [ 00:24:00.648 { 00:24:00.648 "name": "spare", 00:24:00.648 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:24:00.648 "is_configured": true, 00:24:00.648 "data_offset": 2048, 00:24:00.648 "data_size": 63488 00:24:00.648 }, 00:24:00.648 { 00:24:00.648 "name": "BaseBdev2", 00:24:00.648 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:24:00.648 "is_configured": true, 00:24:00.648 "data_offset": 2048, 00:24:00.648 "data_size": 63488 00:24:00.648 }, 00:24:00.648 { 00:24:00.648 "name": "BaseBdev3", 00:24:00.648 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:24:00.648 "is_configured": true, 00:24:00.648 "data_offset": 2048, 00:24:00.648 "data_size": 63488 00:24:00.648 } 00:24:00.648 ] 00:24:00.648 }' 00:24:00.648 07:24:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:00.648 07:24:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:00.648 07:24:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:00.648 07:24:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:00.648 07:24:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:01.584 07:24:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:01.584 07:24:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:01.584 07:24:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:01.584 07:24:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:01.584 07:24:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:01.584 07:24:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:01.584 07:24:35 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:01.584 07:24:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:01.843 07:24:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:01.843 "name": "raid_bdev1", 00:24:01.843 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:24:01.843 "strip_size_kb": 64, 00:24:01.843 "state": "online", 00:24:01.843 "raid_level": "raid5f", 00:24:01.843 "superblock": true, 00:24:01.843 "num_base_bdevs": 3, 00:24:01.843 "num_base_bdevs_discovered": 3, 00:24:01.843 "num_base_bdevs_operational": 3, 00:24:01.843 "process": { 00:24:01.843 "type": "rebuild", 00:24:01.843 "target": "spare", 00:24:01.843 "progress": { 00:24:01.843 "blocks": 112640, 00:24:01.843 "percent": 88 00:24:01.843 } 00:24:01.843 }, 00:24:01.843 "base_bdevs_list": [ 00:24:01.843 { 00:24:01.843 "name": "spare", 00:24:01.843 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:24:01.843 "is_configured": true, 00:24:01.843 "data_offset": 2048, 00:24:01.843 "data_size": 63488 00:24:01.843 }, 00:24:01.843 { 00:24:01.843 "name": "BaseBdev2", 00:24:01.843 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:24:01.843 "is_configured": true, 00:24:01.843 "data_offset": 2048, 00:24:01.843 "data_size": 63488 00:24:01.843 }, 00:24:01.843 { 00:24:01.843 "name": "BaseBdev3", 00:24:01.843 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:24:01.843 "is_configured": true, 00:24:01.843 "data_offset": 2048, 00:24:01.843 "data_size": 63488 00:24:01.843 } 00:24:01.843 ] 00:24:01.843 }' 00:24:01.843 07:24:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:02.101 07:24:35 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:24:02.101 07:24:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:02.101 07:24:35 -- 
bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:24:02.102 07:24:35 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:24:02.669 [2024-02-13 07:24:36.121322] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:24:02.669 [2024-02-13 07:24:36.121396] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:24:02.669 [2024-02-13 07:24:36.121625] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:02.927 07:24:36 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:24:02.927 07:24:36 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:24:02.927 07:24:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:02.927 07:24:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:24:02.927 07:24:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:24:02.927 07:24:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:02.927 07:24:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:02.927 07:24:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.186 07:24:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:03.186 "name": "raid_bdev1", 00:24:03.186 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:24:03.186 "strip_size_kb": 64, 00:24:03.186 "state": "online", 00:24:03.186 "raid_level": "raid5f", 00:24:03.186 "superblock": true, 00:24:03.186 "num_base_bdevs": 3, 00:24:03.186 "num_base_bdevs_discovered": 3, 00:24:03.186 "num_base_bdevs_operational": 3, 00:24:03.186 "base_bdevs_list": [ 00:24:03.186 { 00:24:03.186 "name": "spare", 00:24:03.186 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:24:03.186 "is_configured": true, 00:24:03.186 "data_offset": 2048, 00:24:03.186 "data_size": 63488 00:24:03.186 }, 00:24:03.186 { 00:24:03.186 "name": "BaseBdev2", 00:24:03.186 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:24:03.186 "is_configured": true, 00:24:03.186 "data_offset": 2048, 00:24:03.186 "data_size": 63488 00:24:03.186 }, 00:24:03.186 { 00:24:03.186 "name": "BaseBdev3", 00:24:03.186 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:24:03.186 "is_configured": true, 00:24:03.186 "data_offset": 2048, 00:24:03.186 "data_size": 63488 00:24:03.186 } 00:24:03.186 ] 00:24:03.186 }' 00:24:03.186 07:24:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:03.186 07:24:36 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:24:03.186 07:24:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:03.444 07:24:36 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:24:03.444 07:24:36 -- bdev/bdev_raid.sh@660 -- # break 00:24:03.444 07:24:36 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:24:03.444 07:24:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:03.445 07:24:36 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:03.445 07:24:36 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:03.445 07:24:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:03.445 07:24:36 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.445 07:24:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:03.704 "name": "raid_bdev1", 00:24:03.704 "uuid": 
"e94804c9-1366-44dd-b547-3add67e3d097", 00:24:03.704 "strip_size_kb": 64, 00:24:03.704 "state": "online", 00:24:03.704 "raid_level": "raid5f", 00:24:03.704 "superblock": true, 00:24:03.704 "num_base_bdevs": 3, 00:24:03.704 "num_base_bdevs_discovered": 3, 00:24:03.704 "num_base_bdevs_operational": 3, 00:24:03.704 "base_bdevs_list": [ 00:24:03.704 { 00:24:03.704 "name": "spare", 00:24:03.704 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:24:03.704 "is_configured": true, 00:24:03.704 "data_offset": 2048, 00:24:03.704 "data_size": 63488 00:24:03.704 }, 00:24:03.704 { 00:24:03.704 "name": "BaseBdev2", 00:24:03.704 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:24:03.704 "is_configured": true, 00:24:03.704 "data_offset": 2048, 00:24:03.704 "data_size": 63488 00:24:03.704 }, 00:24:03.704 { 00:24:03.704 "name": "BaseBdev3", 00:24:03.704 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:24:03.704 "is_configured": true, 00:24:03.704 "data_offset": 2048, 00:24:03.704 "data_size": 63488 00:24:03.704 } 00:24:03.704 ] 00:24:03.704 }' 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:03.704 07:24:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:03.963 07:24:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:03.963 "name": "raid_bdev1", 00:24:03.963 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:24:03.963 "strip_size_kb": 64, 00:24:03.963 "state": "online", 00:24:03.963 "raid_level": "raid5f", 00:24:03.963 "superblock": true, 00:24:03.963 "num_base_bdevs": 3, 00:24:03.963 "num_base_bdevs_discovered": 3, 00:24:03.963 "num_base_bdevs_operational": 3, 00:24:03.963 "base_bdevs_list": [ 00:24:03.963 { 00:24:03.963 "name": "spare", 00:24:03.963 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:24:03.963 "is_configured": true, 00:24:03.963 "data_offset": 2048, 00:24:03.963 "data_size": 63488 00:24:03.963 }, 00:24:03.963 { 00:24:03.963 "name": "BaseBdev2", 00:24:03.963 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:24:03.963 "is_configured": true, 00:24:03.963 "data_offset": 2048, 00:24:03.963 "data_size": 63488 00:24:03.963 }, 00:24:03.963 { 00:24:03.963 "name": "BaseBdev3", 00:24:03.963 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:24:03.963 "is_configured": true, 00:24:03.963 "data_offset": 2048, 00:24:03.963 "data_size": 63488 00:24:03.963 } 
00:24:03.963 ] 00:24:03.963 }' 00:24:03.963 07:24:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:03.963 07:24:37 -- common/autotest_common.sh@10 -- # set +x 00:24:04.531 07:24:38 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:04.790 [2024-02-13 07:24:38.293392] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:04.790 [2024-02-13 07:24:38.293428] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:04.790 [2024-02-13 07:24:38.293537] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:04.790 [2024-02-13 07:24:38.293618] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:04.790 [2024-02-13 07:24:38.293630] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state offline 00:24:04.790 07:24:38 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:04.790 07:24:38 -- bdev/bdev_raid.sh@671 -- # jq length 00:24:05.049 07:24:38 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:24:05.049 07:24:38 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:24:05.049 07:24:38 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:24:05.049 07:24:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:05.049 07:24:38 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:24:05.049 07:24:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:05.049 07:24:38 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:24:05.049 07:24:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:05.049 07:24:38 -- bdev/nbd_common.sh@12 -- # local i 00:24:05.049 07:24:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:05.049 07:24:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:05.049 07:24:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:24:05.308 /dev/nbd0 00:24:05.308 07:24:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:05.308 07:24:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:05.308 07:24:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:24:05.308 07:24:38 -- common/autotest_common.sh@855 -- # local i 00:24:05.308 07:24:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:05.308 07:24:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:05.308 07:24:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:24:05.308 07:24:38 -- common/autotest_common.sh@859 -- # break 00:24:05.308 07:24:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:05.308 07:24:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:05.308 07:24:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:05.308 1+0 records in 00:24:05.308 1+0 records out 00:24:05.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116227 s, 3.5 MB/s 00:24:05.308 07:24:38 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:05.308 07:24:38 -- common/autotest_common.sh@872 -- # size=4096 00:24:05.308 07:24:38 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:05.308 07:24:38 -- 
common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:05.308 07:24:38 -- common/autotest_common.sh@875 -- # return 0 00:24:05.308 07:24:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:05.308 07:24:38 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:05.308 07:24:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:24:05.568 /dev/nbd1 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:05.568 07:24:39 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:24:05.568 07:24:39 -- common/autotest_common.sh@855 -- # local i 00:24:05.568 07:24:39 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:24:05.568 07:24:39 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:24:05.568 07:24:39 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:24:05.568 07:24:39 -- common/autotest_common.sh@859 -- # break 00:24:05.568 07:24:39 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:24:05.568 07:24:39 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:24:05.568 07:24:39 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:05.568 1+0 records in 00:24:05.568 1+0 records out 00:24:05.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360227 s, 11.4 MB/s 00:24:05.568 07:24:39 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:05.568 07:24:39 -- common/autotest_common.sh@872 -- # size=4096 00:24:05.568 07:24:39 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:05.568 07:24:39 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:24:05.568 07:24:39 -- common/autotest_common.sh@875 -- # return 0 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:05.568 07:24:39 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:24:05.568 07:24:39 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@51 -- # local i 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:05.568 07:24:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:24:05.827 07:24:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:05.827 07:24:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:05.827 07:24:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:05.827 07:24:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:05.827 07:24:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:05.827 07:24:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:05.827 07:24:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:06.086 07:24:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:06.086 07:24:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:06.086 07:24:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:06.086 07:24:39 -- bdev/nbd_common.sh@41 -- # break 00:24:06.086 07:24:39 -- bdev/nbd_common.sh@45 -- # 
return 0 00:24:06.086 07:24:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:06.086 07:24:39 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@41 -- # break 00:24:06.344 07:24:39 -- bdev/nbd_common.sh@45 -- # return 0 00:24:06.344 07:24:39 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:24:06.344 07:24:39 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:06.344 07:24:39 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:24:06.344 07:24:39 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:24:06.604 07:24:40 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:24:06.863 [2024-02-13 07:24:40.313056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:24:06.863 [2024-02-13 07:24:40.313167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:06.863 [2024-02-13 07:24:40.313200] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:06.863 [2024-02-13 07:24:40.313226] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:06.863 [2024-02-13 07:24:40.315352] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:06.863 [2024-02-13 07:24:40.315414] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:24:06.863 [2024-02-13 07:24:40.315536] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:24:06.863 [2024-02-13 07:24:40.315596] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:06.863 BaseBdev1 00:24:06.863 07:24:40 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:06.863 07:24:40 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:24:06.863 07:24:40 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:24:07.122 07:24:40 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:24:07.122 [2024-02-13 07:24:40.777132] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:24:07.122 [2024-02-13 07:24:40.777226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.122 [2024-02-13 07:24:40.777263] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:07.122 [2024-02-13 07:24:40.777284] 
vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.122 [2024-02-13 07:24:40.777853] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.122 [2024-02-13 07:24:40.777922] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:24:07.122 [2024-02-13 07:24:40.778023] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:24:07.122 [2024-02-13 07:24:40.778037] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:24:07.122 [2024-02-13 07:24:40.778044] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:07.122 [2024-02-13 07:24:40.778063] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b180 name raid_bdev1, state configuring 00:24:07.122 [2024-02-13 07:24:40.778127] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:07.122 BaseBdev2 00:24:07.122 07:24:40 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:24:07.122 07:24:40 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:24:07.122 07:24:40 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:24:07.380 07:24:40 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:24:07.638 [2024-02-13 07:24:41.205266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:24:07.638 [2024-02-13 07:24:41.205375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.638 [2024-02-13 07:24:41.205436] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:24:07.638 [2024-02-13 07:24:41.205458] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.638 [2024-02-13 07:24:41.206007] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.638 [2024-02-13 07:24:41.206090] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:24:07.638 [2024-02-13 07:24:41.206235] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:24:07.638 [2024-02-13 07:24:41.206261] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:07.638 BaseBdev3 00:24:07.638 07:24:41 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:24:07.897 07:24:41 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:24:07.897 [2024-02-13 07:24:41.589388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:24:07.897 [2024-02-13 07:24:41.589484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:07.897 [2024-02-13 07:24:41.589524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:24:07.897 [2024-02-13 07:24:41.589552] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:07.897 [2024-02-13 07:24:41.590106] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:07.897 [2024-02-13 07:24:41.590201] vbdev_passthru.c: 
705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:24:07.897 [2024-02-13 07:24:41.590311] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:24:07.897 [2024-02-13 07:24:41.590339] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:24:08.156 spare 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.156 [2024-02-13 07:24:41.690449] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000b780 00:24:08.156 [2024-02-13 07:24:41.690486] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:24:08.156 [2024-02-13 07:24:41.690595] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004bb40 00:24:08.156 [2024-02-13 07:24:41.694652] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000b780 00:24:08.156 [2024-02-13 07:24:41.694675] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000b780 00:24:08.156 [2024-02-13 07:24:41.694829] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:08.156 "name": "raid_bdev1", 00:24:08.156 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:24:08.156 "strip_size_kb": 64, 00:24:08.156 "state": "online", 00:24:08.156 "raid_level": "raid5f", 00:24:08.156 "superblock": true, 00:24:08.156 "num_base_bdevs": 3, 00:24:08.156 "num_base_bdevs_discovered": 3, 00:24:08.156 "num_base_bdevs_operational": 3, 00:24:08.156 "base_bdevs_list": [ 00:24:08.156 { 00:24:08.156 "name": "spare", 00:24:08.156 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:24:08.156 "is_configured": true, 00:24:08.156 "data_offset": 2048, 00:24:08.156 "data_size": 63488 00:24:08.156 }, 00:24:08.156 { 00:24:08.156 "name": "BaseBdev2", 00:24:08.156 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:24:08.156 "is_configured": true, 00:24:08.156 "data_offset": 2048, 00:24:08.156 "data_size": 63488 00:24:08.156 }, 00:24:08.156 { 00:24:08.156 "name": "BaseBdev3", 00:24:08.156 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:24:08.156 "is_configured": true, 00:24:08.156 "data_offset": 2048, 00:24:08.156 "data_size": 63488 00:24:08.156 } 00:24:08.156 ] 00:24:08.156 }' 00:24:08.156 07:24:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:08.156 07:24:41 -- common/autotest_common.sh@10 -- # set +x 00:24:08.722 07:24:42 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none 
none 00:24:08.722 07:24:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:24:08.722 07:24:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:24:08.722 07:24:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:24:08.722 07:24:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:24:08.722 07:24:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:08.723 07:24:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:08.980 07:24:42 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:24:08.980 "name": "raid_bdev1", 00:24:08.980 "uuid": "e94804c9-1366-44dd-b547-3add67e3d097", 00:24:08.980 "strip_size_kb": 64, 00:24:08.980 "state": "online", 00:24:08.980 "raid_level": "raid5f", 00:24:08.980 "superblock": true, 00:24:08.980 "num_base_bdevs": 3, 00:24:08.980 "num_base_bdevs_discovered": 3, 00:24:08.980 "num_base_bdevs_operational": 3, 00:24:08.980 "base_bdevs_list": [ 00:24:08.980 { 00:24:08.980 "name": "spare", 00:24:08.980 "uuid": "cb1e2605-1616-5ad4-9389-a70c12d04c4d", 00:24:08.980 "is_configured": true, 00:24:08.980 "data_offset": 2048, 00:24:08.980 "data_size": 63488 00:24:08.980 }, 00:24:08.980 { 00:24:08.980 "name": "BaseBdev2", 00:24:08.980 "uuid": "8589ca56-6592-5eab-ada8-f8329c866208", 00:24:08.980 "is_configured": true, 00:24:08.980 "data_offset": 2048, 00:24:08.980 "data_size": 63488 00:24:08.980 }, 00:24:08.980 { 00:24:08.980 "name": "BaseBdev3", 00:24:08.980 "uuid": "4c8ecf00-8c17-57e5-b40d-16f04bb3c8f4", 00:24:08.980 "is_configured": true, 00:24:08.980 "data_offset": 2048, 00:24:08.980 "data_size": 63488 00:24:08.980 } 00:24:08.980 ] 00:24:08.980 }' 00:24:08.980 07:24:42 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:24:09.239 07:24:42 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:24:09.239 07:24:42 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:24:09.239 07:24:42 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:24:09.239 07:24:42 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:09.239 07:24:42 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:24:09.498 07:24:43 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:24:09.498 07:24:43 -- bdev/bdev_raid.sh@709 -- # killprocess 134240 00:24:09.498 07:24:43 -- common/autotest_common.sh@924 -- # '[' -z 134240 ']' 00:24:09.498 07:24:43 -- common/autotest_common.sh@928 -- # kill -0 134240 00:24:09.498 07:24:43 -- common/autotest_common.sh@929 -- # uname 00:24:09.498 07:24:43 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:09.498 07:24:43 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 134240 00:24:09.498 07:24:43 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:09.498 07:24:43 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:09.498 07:24:43 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 134240' 00:24:09.498 killing process with pid 134240 00:24:09.498 07:24:43 -- common/autotest_common.sh@943 -- # kill 134240 00:24:09.498 07:24:43 -- common/autotest_common.sh@948 -- # wait 134240 00:24:09.498 Received shutdown signal, test time was about 60.000000 seconds 00:24:09.498 00:24:09.498 Latency(us) 00:24:09.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.498 
=================================================================================================================== 00:24:09.498 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:09.498 [2024-02-13 07:24:43.032822] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:09.498 [2024-02-13 07:24:43.032965] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:09.498 [2024-02-13 07:24:43.033099] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:09.498 [2024-02-13 07:24:43.033137] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000b780 name raid_bdev1, state offline 00:24:09.756 [2024-02-13 07:24:43.290730] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:10.692 ************************************ 00:24:10.692 END TEST raid5f_rebuild_test_sb 00:24:10.692 ************************************ 00:24:10.692 07:24:44 -- bdev/bdev_raid.sh@711 -- # return 0 00:24:10.692 00:24:10.692 real 0m24.101s 00:24:10.692 user 0m37.355s 00:24:10.692 sys 0m2.906s 00:24:10.692 07:24:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:10.692 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:24:10.692 07:24:44 -- bdev/bdev_raid.sh@743 -- # for n in {3..4} 00:24:10.692 07:24:44 -- bdev/bdev_raid.sh@744 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:24:10.692 07:24:44 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:24:10.692 07:24:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:10.692 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:24:10.692 ************************************ 00:24:10.692 START TEST raid5f_state_function_test 00:24:10.692 ************************************ 00:24:10.692 07:24:44 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid5f 4 false 00:24:10.692 07:24:44 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:10.692 07:24:44 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@208 -- # local strip_size 
00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@226 -- # raid_pid=134937 00:24:10.693 Process raid pid: 134937 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 134937' 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@228 -- # waitforlisten 134937 /var/tmp/spdk-raid.sock 00:24:10.693 07:24:44 -- common/autotest_common.sh@817 -- # '[' -z 134937 ']' 00:24:10.693 07:24:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:10.693 07:24:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:10.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:10.693 07:24:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:10.693 07:24:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:10.693 07:24:44 -- common/autotest_common.sh@10 -- # set +x 00:24:10.693 07:24:44 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:10.951 [2024-02-13 07:24:44.395284] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:24:10.951 [2024-02-13 07:24:44.395673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.951 [2024-02-13 07:24:44.558640] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.209 [2024-02-13 07:24:44.749773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.467 [2024-02-13 07:24:44.928759] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:11.726 07:24:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:11.726 07:24:45 -- common/autotest_common.sh@850 -- # return 0 00:24:11.726 07:24:45 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:11.984 [2024-02-13 07:24:45.568487] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:11.984 [2024-02-13 07:24:45.568602] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:11.984 [2024-02-13 07:24:45.568617] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:11.984 [2024-02-13 07:24:45.568638] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:11.984 [2024-02-13 07:24:45.568645] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:11.984 [2024-02-13 07:24:45.568685] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:11.984 [2024-02-13 07:24:45.568695] 
bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:11.984 [2024-02-13 07:24:45.568715] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:11.984 07:24:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:12.243 07:24:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:12.243 "name": "Existed_Raid", 00:24:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.243 "strip_size_kb": 64, 00:24:12.243 "state": "configuring", 00:24:12.243 "raid_level": "raid5f", 00:24:12.243 "superblock": false, 00:24:12.243 "num_base_bdevs": 4, 00:24:12.243 "num_base_bdevs_discovered": 0, 00:24:12.243 "num_base_bdevs_operational": 4, 00:24:12.243 "base_bdevs_list": [ 00:24:12.243 { 00:24:12.243 "name": "BaseBdev1", 00:24:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.243 "is_configured": false, 00:24:12.243 "data_offset": 0, 00:24:12.243 "data_size": 0 00:24:12.243 }, 00:24:12.243 { 00:24:12.243 "name": "BaseBdev2", 00:24:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.243 "is_configured": false, 00:24:12.243 "data_offset": 0, 00:24:12.243 "data_size": 0 00:24:12.243 }, 00:24:12.243 { 00:24:12.243 "name": "BaseBdev3", 00:24:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.243 "is_configured": false, 00:24:12.243 "data_offset": 0, 00:24:12.243 "data_size": 0 00:24:12.243 }, 00:24:12.243 { 00:24:12.243 "name": "BaseBdev4", 00:24:12.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:12.243 "is_configured": false, 00:24:12.243 "data_offset": 0, 00:24:12.243 "data_size": 0 00:24:12.243 } 00:24:12.243 ] 00:24:12.243 }' 00:24:12.243 07:24:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:12.243 07:24:45 -- common/autotest_common.sh@10 -- # set +x 00:24:12.818 07:24:46 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:13.077 [2024-02-13 07:24:46.708609] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:13.077 [2024-02-13 07:24:46.708709] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:13.077 07:24:46 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:13.336 [2024-02-13 07:24:46.888675] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev1 00:24:13.336 [2024-02-13 07:24:46.888816] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:13.336 [2024-02-13 07:24:46.888830] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:13.336 [2024-02-13 07:24:46.888859] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:13.336 [2024-02-13 07:24:46.888866] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:13.336 [2024-02-13 07:24:46.888908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:13.336 [2024-02-13 07:24:46.888915] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:13.336 [2024-02-13 07:24:46.888939] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:13.336 07:24:46 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:13.595 [2024-02-13 07:24:47.166414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:13.595 BaseBdev1 00:24:13.595 07:24:47 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:13.595 07:24:47 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:24:13.595 07:24:47 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:13.595 07:24:47 -- common/autotest_common.sh@887 -- # local i 00:24:13.595 07:24:47 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:13.595 07:24:47 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:13.595 07:24:47 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:13.854 07:24:47 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:13.854 [ 00:24:13.854 { 00:24:13.854 "name": "BaseBdev1", 00:24:13.854 "aliases": [ 00:24:13.854 "7428b875-3cf5-48df-a8e3-63859474b709" 00:24:13.854 ], 00:24:13.854 "product_name": "Malloc disk", 00:24:13.854 "block_size": 512, 00:24:13.854 "num_blocks": 65536, 00:24:13.854 "uuid": "7428b875-3cf5-48df-a8e3-63859474b709", 00:24:13.854 "assigned_rate_limits": { 00:24:13.854 "rw_ios_per_sec": 0, 00:24:13.854 "rw_mbytes_per_sec": 0, 00:24:13.854 "r_mbytes_per_sec": 0, 00:24:13.854 "w_mbytes_per_sec": 0 00:24:13.854 }, 00:24:13.854 "claimed": true, 00:24:13.854 "claim_type": "exclusive_write", 00:24:13.854 "zoned": false, 00:24:13.854 "supported_io_types": { 00:24:13.854 "read": true, 00:24:13.854 "write": true, 00:24:13.854 "unmap": true, 00:24:13.854 "write_zeroes": true, 00:24:13.854 "flush": true, 00:24:13.854 "reset": true, 00:24:13.854 "compare": false, 00:24:13.854 "compare_and_write": false, 00:24:13.854 "abort": true, 00:24:13.854 "nvme_admin": false, 00:24:13.854 "nvme_io": false 00:24:13.854 }, 00:24:13.854 "memory_domains": [ 00:24:13.854 { 00:24:13.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:13.854 "dma_device_type": 2 00:24:13.854 } 00:24:13.854 ], 00:24:13.854 "driver_specific": {} 00:24:13.854 } 00:24:13.854 ] 00:24:14.113 07:24:47 -- common/autotest_common.sh@893 -- # return 0 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:14.113 07:24:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:14.372 07:24:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:14.372 "name": "Existed_Raid", 00:24:14.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.372 "strip_size_kb": 64, 00:24:14.372 "state": "configuring", 00:24:14.372 "raid_level": "raid5f", 00:24:14.372 "superblock": false, 00:24:14.372 "num_base_bdevs": 4, 00:24:14.372 "num_base_bdevs_discovered": 1, 00:24:14.372 "num_base_bdevs_operational": 4, 00:24:14.372 "base_bdevs_list": [ 00:24:14.372 { 00:24:14.372 "name": "BaseBdev1", 00:24:14.372 "uuid": "7428b875-3cf5-48df-a8e3-63859474b709", 00:24:14.372 "is_configured": true, 00:24:14.372 "data_offset": 0, 00:24:14.372 "data_size": 65536 00:24:14.372 }, 00:24:14.372 { 00:24:14.372 "name": "BaseBdev2", 00:24:14.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.372 "is_configured": false, 00:24:14.372 "data_offset": 0, 00:24:14.372 "data_size": 0 00:24:14.372 }, 00:24:14.372 { 00:24:14.372 "name": "BaseBdev3", 00:24:14.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.372 "is_configured": false, 00:24:14.372 "data_offset": 0, 00:24:14.372 "data_size": 0 00:24:14.372 }, 00:24:14.372 { 00:24:14.372 "name": "BaseBdev4", 00:24:14.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:14.372 "is_configured": false, 00:24:14.372 "data_offset": 0, 00:24:14.372 "data_size": 0 00:24:14.372 } 00:24:14.372 ] 00:24:14.372 }' 00:24:14.372 07:24:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:14.372 07:24:47 -- common/autotest_common.sh@10 -- # set +x 00:24:14.940 07:24:48 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:14.940 [2024-02-13 07:24:48.606755] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:14.940 [2024-02-13 07:24:48.606843] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:14.940 07:24:48 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:24:14.940 07:24:48 -- bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:15.199 [2024-02-13 07:24:48.846793] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:15.199 [2024-02-13 07:24:48.848832] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:15.199 [2024-02-13 07:24:48.848938] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:15.199 [2024-02-13 07:24:48.848951] bdev.c:8014:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:15.199 [2024-02-13 07:24:48.848977] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:15.199 [2024-02-13 07:24:48.848985] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:15.199 [2024-02-13 07:24:48.849003] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:15.199 07:24:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:15.458 07:24:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:15.458 "name": "Existed_Raid", 00:24:15.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.458 "strip_size_kb": 64, 00:24:15.458 "state": "configuring", 00:24:15.458 "raid_level": "raid5f", 00:24:15.458 "superblock": false, 00:24:15.458 "num_base_bdevs": 4, 00:24:15.458 "num_base_bdevs_discovered": 1, 00:24:15.458 "num_base_bdevs_operational": 4, 00:24:15.458 "base_bdevs_list": [ 00:24:15.458 { 00:24:15.458 "name": "BaseBdev1", 00:24:15.458 "uuid": "7428b875-3cf5-48df-a8e3-63859474b709", 00:24:15.458 "is_configured": true, 00:24:15.458 "data_offset": 0, 00:24:15.458 "data_size": 65536 00:24:15.458 }, 00:24:15.458 { 00:24:15.458 "name": "BaseBdev2", 00:24:15.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.458 "is_configured": false, 00:24:15.458 "data_offset": 0, 00:24:15.458 "data_size": 0 00:24:15.458 }, 00:24:15.458 { 00:24:15.458 "name": "BaseBdev3", 00:24:15.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.458 "is_configured": false, 00:24:15.458 "data_offset": 0, 00:24:15.458 "data_size": 0 00:24:15.458 }, 00:24:15.458 { 00:24:15.458 "name": "BaseBdev4", 00:24:15.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:15.458 "is_configured": false, 00:24:15.458 "data_offset": 0, 00:24:15.458 "data_size": 0 00:24:15.458 } 00:24:15.458 ] 00:24:15.458 }' 00:24:15.458 07:24:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:15.458 07:24:49 -- common/autotest_common.sh@10 -- # set +x 00:24:16.394 07:24:49 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:16.394 [2024-02-13 07:24:49.950747] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:16.394 BaseBdev2 00:24:16.394 07:24:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev 
BaseBdev2 00:24:16.394 07:24:49 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:24:16.394 07:24:49 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:16.394 07:24:49 -- common/autotest_common.sh@887 -- # local i 00:24:16.394 07:24:49 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:16.394 07:24:49 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:16.394 07:24:49 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:16.652 07:24:50 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:16.911 [ 00:24:16.911 { 00:24:16.911 "name": "BaseBdev2", 00:24:16.911 "aliases": [ 00:24:16.911 "1150bc01-b4bd-4e0c-a800-48463a950dbe" 00:24:16.911 ], 00:24:16.911 "product_name": "Malloc disk", 00:24:16.911 "block_size": 512, 00:24:16.912 "num_blocks": 65536, 00:24:16.912 "uuid": "1150bc01-b4bd-4e0c-a800-48463a950dbe", 00:24:16.912 "assigned_rate_limits": { 00:24:16.912 "rw_ios_per_sec": 0, 00:24:16.912 "rw_mbytes_per_sec": 0, 00:24:16.912 "r_mbytes_per_sec": 0, 00:24:16.912 "w_mbytes_per_sec": 0 00:24:16.912 }, 00:24:16.912 "claimed": true, 00:24:16.912 "claim_type": "exclusive_write", 00:24:16.912 "zoned": false, 00:24:16.912 "supported_io_types": { 00:24:16.912 "read": true, 00:24:16.912 "write": true, 00:24:16.912 "unmap": true, 00:24:16.912 "write_zeroes": true, 00:24:16.912 "flush": true, 00:24:16.912 "reset": true, 00:24:16.912 "compare": false, 00:24:16.912 "compare_and_write": false, 00:24:16.912 "abort": true, 00:24:16.912 "nvme_admin": false, 00:24:16.912 "nvme_io": false 00:24:16.912 }, 00:24:16.912 "memory_domains": [ 00:24:16.912 { 00:24:16.912 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:16.912 "dma_device_type": 2 00:24:16.912 } 00:24:16.912 ], 00:24:16.912 "driver_specific": {} 00:24:16.912 } 00:24:16.912 ] 00:24:16.912 07:24:50 -- common/autotest_common.sh@893 -- # return 0 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:16.912 07:24:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:17.171 07:24:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:17.171 "name": "Existed_Raid", 00:24:17.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.171 "strip_size_kb": 64, 00:24:17.171 "state": "configuring", 00:24:17.171 "raid_level": "raid5f", 00:24:17.171 "superblock": false, 00:24:17.171 
"num_base_bdevs": 4, 00:24:17.171 "num_base_bdevs_discovered": 2, 00:24:17.171 "num_base_bdevs_operational": 4, 00:24:17.171 "base_bdevs_list": [ 00:24:17.171 { 00:24:17.171 "name": "BaseBdev1", 00:24:17.171 "uuid": "7428b875-3cf5-48df-a8e3-63859474b709", 00:24:17.171 "is_configured": true, 00:24:17.171 "data_offset": 0, 00:24:17.171 "data_size": 65536 00:24:17.171 }, 00:24:17.171 { 00:24:17.171 "name": "BaseBdev2", 00:24:17.171 "uuid": "1150bc01-b4bd-4e0c-a800-48463a950dbe", 00:24:17.171 "is_configured": true, 00:24:17.171 "data_offset": 0, 00:24:17.171 "data_size": 65536 00:24:17.171 }, 00:24:17.171 { 00:24:17.171 "name": "BaseBdev3", 00:24:17.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.171 "is_configured": false, 00:24:17.171 "data_offset": 0, 00:24:17.171 "data_size": 0 00:24:17.171 }, 00:24:17.171 { 00:24:17.171 "name": "BaseBdev4", 00:24:17.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:17.171 "is_configured": false, 00:24:17.171 "data_offset": 0, 00:24:17.171 "data_size": 0 00:24:17.171 } 00:24:17.171 ] 00:24:17.171 }' 00:24:17.171 07:24:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:17.171 07:24:50 -- common/autotest_common.sh@10 -- # set +x 00:24:17.738 07:24:51 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:17.997 [2024-02-13 07:24:51.583165] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:17.997 BaseBdev3 00:24:17.997 07:24:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:17.997 07:24:51 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:24:17.997 07:24:51 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:17.997 07:24:51 -- common/autotest_common.sh@887 -- # local i 00:24:17.997 07:24:51 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:17.997 07:24:51 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:17.997 07:24:51 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:18.257 07:24:51 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:18.516 [ 00:24:18.516 { 00:24:18.516 "name": "BaseBdev3", 00:24:18.516 "aliases": [ 00:24:18.516 "e6075006-402f-42c1-a6c1-27ea3a2af6af" 00:24:18.516 ], 00:24:18.516 "product_name": "Malloc disk", 00:24:18.516 "block_size": 512, 00:24:18.516 "num_blocks": 65536, 00:24:18.516 "uuid": "e6075006-402f-42c1-a6c1-27ea3a2af6af", 00:24:18.516 "assigned_rate_limits": { 00:24:18.516 "rw_ios_per_sec": 0, 00:24:18.516 "rw_mbytes_per_sec": 0, 00:24:18.516 "r_mbytes_per_sec": 0, 00:24:18.516 "w_mbytes_per_sec": 0 00:24:18.516 }, 00:24:18.516 "claimed": true, 00:24:18.516 "claim_type": "exclusive_write", 00:24:18.516 "zoned": false, 00:24:18.516 "supported_io_types": { 00:24:18.516 "read": true, 00:24:18.516 "write": true, 00:24:18.516 "unmap": true, 00:24:18.516 "write_zeroes": true, 00:24:18.516 "flush": true, 00:24:18.516 "reset": true, 00:24:18.516 "compare": false, 00:24:18.516 "compare_and_write": false, 00:24:18.516 "abort": true, 00:24:18.516 "nvme_admin": false, 00:24:18.516 "nvme_io": false 00:24:18.516 }, 00:24:18.516 "memory_domains": [ 00:24:18.516 { 00:24:18.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:18.516 "dma_device_type": 2 00:24:18.516 } 00:24:18.516 ], 00:24:18.516 "driver_specific": {} 00:24:18.516 } 00:24:18.516 ] 
00:24:18.516 07:24:52 -- common/autotest_common.sh@893 -- # return 0 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:18.516 07:24:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:18.775 07:24:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:18.775 "name": "Existed_Raid", 00:24:18.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.775 "strip_size_kb": 64, 00:24:18.775 "state": "configuring", 00:24:18.775 "raid_level": "raid5f", 00:24:18.775 "superblock": false, 00:24:18.775 "num_base_bdevs": 4, 00:24:18.775 "num_base_bdevs_discovered": 3, 00:24:18.775 "num_base_bdevs_operational": 4, 00:24:18.775 "base_bdevs_list": [ 00:24:18.775 { 00:24:18.775 "name": "BaseBdev1", 00:24:18.775 "uuid": "7428b875-3cf5-48df-a8e3-63859474b709", 00:24:18.775 "is_configured": true, 00:24:18.775 "data_offset": 0, 00:24:18.775 "data_size": 65536 00:24:18.775 }, 00:24:18.775 { 00:24:18.775 "name": "BaseBdev2", 00:24:18.775 "uuid": "1150bc01-b4bd-4e0c-a800-48463a950dbe", 00:24:18.775 "is_configured": true, 00:24:18.775 "data_offset": 0, 00:24:18.775 "data_size": 65536 00:24:18.775 }, 00:24:18.775 { 00:24:18.775 "name": "BaseBdev3", 00:24:18.775 "uuid": "e6075006-402f-42c1-a6c1-27ea3a2af6af", 00:24:18.775 "is_configured": true, 00:24:18.775 "data_offset": 0, 00:24:18.775 "data_size": 65536 00:24:18.775 }, 00:24:18.775 { 00:24:18.775 "name": "BaseBdev4", 00:24:18.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:18.775 "is_configured": false, 00:24:18.775 "data_offset": 0, 00:24:18.775 "data_size": 0 00:24:18.775 } 00:24:18.775 ] 00:24:18.775 }' 00:24:18.775 07:24:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:18.775 07:24:52 -- common/autotest_common.sh@10 -- # set +x 00:24:19.342 07:24:52 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:19.601 [2024-02-13 07:24:53.264289] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:19.601 [2024-02-13 07:24:53.264342] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007280 00:24:19.601 [2024-02-13 07:24:53.264352] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:24:19.601 [2024-02-13 07:24:53.264460] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000056c0 00:24:19.601 [2024-02-13 07:24:53.270212] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x616000007280 00:24:19.601 [2024-02-13 07:24:53.270235] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007280 00:24:19.601 [2024-02-13 07:24:53.270481] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:19.601 BaseBdev4 00:24:19.601 07:24:53 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:19.601 07:24:53 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:24:19.601 07:24:53 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:19.601 07:24:53 -- common/autotest_common.sh@887 -- # local i 00:24:19.601 07:24:53 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:19.601 07:24:53 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:19.601 07:24:53 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:19.859 07:24:53 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:20.118 [ 00:24:20.118 { 00:24:20.118 "name": "BaseBdev4", 00:24:20.118 "aliases": [ 00:24:20.118 "02a6391c-8d4b-404b-91fa-41a0c8376857" 00:24:20.118 ], 00:24:20.118 "product_name": "Malloc disk", 00:24:20.118 "block_size": 512, 00:24:20.118 "num_blocks": 65536, 00:24:20.118 "uuid": "02a6391c-8d4b-404b-91fa-41a0c8376857", 00:24:20.118 "assigned_rate_limits": { 00:24:20.118 "rw_ios_per_sec": 0, 00:24:20.118 "rw_mbytes_per_sec": 0, 00:24:20.118 "r_mbytes_per_sec": 0, 00:24:20.118 "w_mbytes_per_sec": 0 00:24:20.118 }, 00:24:20.118 "claimed": true, 00:24:20.118 "claim_type": "exclusive_write", 00:24:20.118 "zoned": false, 00:24:20.118 "supported_io_types": { 00:24:20.118 "read": true, 00:24:20.118 "write": true, 00:24:20.118 "unmap": true, 00:24:20.118 "write_zeroes": true, 00:24:20.118 "flush": true, 00:24:20.118 "reset": true, 00:24:20.118 "compare": false, 00:24:20.118 "compare_and_write": false, 00:24:20.118 "abort": true, 00:24:20.118 "nvme_admin": false, 00:24:20.118 "nvme_io": false 00:24:20.118 }, 00:24:20.118 "memory_domains": [ 00:24:20.118 { 00:24:20.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:20.118 "dma_device_type": 2 00:24:20.118 } 00:24:20.118 ], 00:24:20.118 "driver_specific": {} 00:24:20.118 } 00:24:20.118 ] 00:24:20.118 07:24:53 -- common/autotest_common.sh@893 -- # return 0 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:20.118 07:24:53 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:20.118 07:24:53 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:20.376 07:24:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:20.376 "name": "Existed_Raid", 00:24:20.376 "uuid": "327e592b-6a3b-49a1-aa9a-96df0120a424", 00:24:20.376 "strip_size_kb": 64, 00:24:20.376 "state": "online", 00:24:20.376 "raid_level": "raid5f", 00:24:20.376 "superblock": false, 00:24:20.376 "num_base_bdevs": 4, 00:24:20.376 "num_base_bdevs_discovered": 4, 00:24:20.376 "num_base_bdevs_operational": 4, 00:24:20.376 "base_bdevs_list": [ 00:24:20.376 { 00:24:20.376 "name": "BaseBdev1", 00:24:20.376 "uuid": "7428b875-3cf5-48df-a8e3-63859474b709", 00:24:20.376 "is_configured": true, 00:24:20.376 "data_offset": 0, 00:24:20.376 "data_size": 65536 00:24:20.376 }, 00:24:20.376 { 00:24:20.376 "name": "BaseBdev2", 00:24:20.376 "uuid": "1150bc01-b4bd-4e0c-a800-48463a950dbe", 00:24:20.376 "is_configured": true, 00:24:20.376 "data_offset": 0, 00:24:20.376 "data_size": 65536 00:24:20.376 }, 00:24:20.376 { 00:24:20.376 "name": "BaseBdev3", 00:24:20.376 "uuid": "e6075006-402f-42c1-a6c1-27ea3a2af6af", 00:24:20.376 "is_configured": true, 00:24:20.376 "data_offset": 0, 00:24:20.376 "data_size": 65536 00:24:20.376 }, 00:24:20.376 { 00:24:20.376 "name": "BaseBdev4", 00:24:20.376 "uuid": "02a6391c-8d4b-404b-91fa-41a0c8376857", 00:24:20.376 "is_configured": true, 00:24:20.376 "data_offset": 0, 00:24:20.376 "data_size": 65536 00:24:20.376 } 00:24:20.376 ] 00:24:20.376 }' 00:24:20.376 07:24:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:20.376 07:24:53 -- common/autotest_common.sh@10 -- # set +x 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:21.312 [2024-02-13 07:24:54.904737] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:21.312 07:24:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:21.570 07:24:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:21.570 "name": "Existed_Raid", 00:24:21.570 "uuid": "327e592b-6a3b-49a1-aa9a-96df0120a424", 00:24:21.570 "strip_size_kb": 64, 00:24:21.570 "state": "online", 00:24:21.570 "raid_level": "raid5f", 00:24:21.570 "superblock": false, 
00:24:21.570 "num_base_bdevs": 4, 00:24:21.570 "num_base_bdevs_discovered": 3, 00:24:21.570 "num_base_bdevs_operational": 3, 00:24:21.570 "base_bdevs_list": [ 00:24:21.570 { 00:24:21.570 "name": null, 00:24:21.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:21.570 "is_configured": false, 00:24:21.570 "data_offset": 0, 00:24:21.570 "data_size": 65536 00:24:21.570 }, 00:24:21.570 { 00:24:21.570 "name": "BaseBdev2", 00:24:21.570 "uuid": "1150bc01-b4bd-4e0c-a800-48463a950dbe", 00:24:21.570 "is_configured": true, 00:24:21.570 "data_offset": 0, 00:24:21.570 "data_size": 65536 00:24:21.570 }, 00:24:21.570 { 00:24:21.570 "name": "BaseBdev3", 00:24:21.570 "uuid": "e6075006-402f-42c1-a6c1-27ea3a2af6af", 00:24:21.570 "is_configured": true, 00:24:21.570 "data_offset": 0, 00:24:21.570 "data_size": 65536 00:24:21.570 }, 00:24:21.570 { 00:24:21.570 "name": "BaseBdev4", 00:24:21.570 "uuid": "02a6391c-8d4b-404b-91fa-41a0c8376857", 00:24:21.570 "is_configured": true, 00:24:21.570 "data_offset": 0, 00:24:21.570 "data_size": 65536 00:24:21.570 } 00:24:21.570 ] 00:24:21.570 }' 00:24:21.570 07:24:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:21.570 07:24:55 -- common/autotest_common.sh@10 -- # set +x 00:24:22.505 07:24:55 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:22.505 07:24:55 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:22.505 07:24:55 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.505 07:24:55 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:22.505 07:24:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:22.505 07:24:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:22.505 07:24:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:22.764 [2024-02-13 07:24:56.323232] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:22.764 [2024-02-13 07:24:56.323268] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:22.764 [2024-02-13 07:24:56.323351] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:22.764 07:24:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:22.764 07:24:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:22.764 07:24:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:22.764 07:24:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:23.022 07:24:56 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:23.022 07:24:56 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:23.022 07:24:56 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:23.280 [2024-02-13 07:24:56.860368] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:23.280 07:24:56 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:23.280 07:24:56 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:23.280 07:24:56 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.280 07:24:56 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:23.539 07:24:57 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:23.539 07:24:57 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:24:23.539 07:24:57 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:23.798 [2024-02-13 07:24:57.369385] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:23.798 [2024-02-13 07:24:57.369521] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007280 name Existed_Raid, state offline 00:24:23.798 07:24:57 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:23.798 07:24:57 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:23.798 07:24:57 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:23.798 07:24:57 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:24.056 07:24:57 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:24.057 07:24:57 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:24.057 07:24:57 -- bdev/bdev_raid.sh@287 -- # killprocess 134937 00:24:24.057 07:24:57 -- common/autotest_common.sh@924 -- # '[' -z 134937 ']' 00:24:24.057 07:24:57 -- common/autotest_common.sh@928 -- # kill -0 134937 00:24:24.057 07:24:57 -- common/autotest_common.sh@929 -- # uname 00:24:24.057 07:24:57 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:24.057 07:24:57 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 134937 00:24:24.057 killing process with pid 134937 00:24:24.057 07:24:57 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:24.057 07:24:57 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:24.057 07:24:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 134937' 00:24:24.057 07:24:57 -- common/autotest_common.sh@943 -- # kill 134937 00:24:24.057 07:24:57 -- common/autotest_common.sh@948 -- # wait 134937 00:24:24.057 [2024-02-13 07:24:57.699613] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:24.057 [2024-02-13 07:24:57.699761] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:25.019 ************************************ 00:24:25.019 END TEST raid5f_state_function_test 00:24:25.019 ************************************ 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:25.019 00:24:25.019 real 0m14.324s 00:24:25.019 user 0m25.790s 00:24:25.019 sys 0m1.672s 00:24:25.019 07:24:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:25.019 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@745 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:24:25.019 07:24:58 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:24:25.019 07:24:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:25.019 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.019 ************************************ 00:24:25.019 START TEST raid5f_state_function_test_sb 00:24:25.019 ************************************ 00:24:25.019 07:24:58 -- common/autotest_common.sh@1102 -- # raid_state_function_test raid5f 4 true 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid5f 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:24:25.019 07:24:58 -- 
bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev1 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev2 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev3 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # echo BaseBdev4 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:24:25.019 07:24:58 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@212 -- # '[' raid5f '!=' raid1 ']' 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@226 -- # raid_pid=135390 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 135390' 00:24:25.278 Process raid pid: 135390 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@225 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:24:25.278 07:24:58 -- bdev/bdev_raid.sh@228 -- # waitforlisten 135390 /var/tmp/spdk-raid.sock 00:24:25.278 07:24:58 -- common/autotest_common.sh@817 -- # '[' -z 135390 ']' 00:24:25.278 07:24:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:25.279 07:24:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:25.279 07:24:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:25.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:25.279 07:24:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:25.279 07:24:58 -- common/autotest_common.sh@10 -- # set +x 00:24:25.279 [2024-02-13 07:24:58.777917] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
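The raid5f_state_function_test_sb run that begins here drives the same state machine with superblock-enabled bdevs (-s). Condensed, its setup is: start bdev_svc on a private RPC socket, create four 32 MiB malloc base bdevs, then assemble them into a raid5f array with a 64 KiB strip. A sketch using only commands that appear verbatim in this trace; the harness additionally gates each step on waitforlisten/waitforbdev, which is omitted here:

    #!/usr/bin/env bash
    # Sketch of the _sb setup; all RPCs below are copied from this trace.
    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b BaseBdev$i    # 32 MiB, 512 B blocks
    done
    # -z 64: strip size in KiB; -s: write a superblock to each base bdev
    $RPC bdev_raid_create -z 64 -s -r raid5f \
         -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid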
00:24:25.279 [2024-02-13 07:24:58.778371] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.279 [2024-02-13 07:24:58.947912] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.537 [2024-02-13 07:24:59.168709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.796 [2024-02-13 07:24:59.348684] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:26.056 07:24:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:26.056 07:24:59 -- common/autotest_common.sh@850 -- # return 0 00:24:26.056 07:24:59 -- bdev/bdev_raid.sh@232 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:26.315 [2024-02-13 07:24:59.894169] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:26.315 [2024-02-13 07:24:59.894917] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:26.315 [2024-02-13 07:24:59.895059] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:26.315 [2024-02-13 07:24:59.895264] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:26.315 [2024-02-13 07:24:59.895456] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:26.315 [2024-02-13 07:24:59.895663] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:26.315 [2024-02-13 07:24:59.895783] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:26.315 [2024-02-13 07:24:59.895966] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:26.315 07:24:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:26.574 07:25:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:26.574 "name": "Existed_Raid", 00:24:26.574 "uuid": "eacb4009-a090-4364-9c2d-e59011ee3543", 00:24:26.574 "strip_size_kb": 64, 00:24:26.574 "state": "configuring", 00:24:26.574 "raid_level": "raid5f", 00:24:26.574 "superblock": true, 00:24:26.574 "num_base_bdevs": 4, 00:24:26.574 "num_base_bdevs_discovered": 0, 00:24:26.574 "num_base_bdevs_operational": 4, 00:24:26.574 "base_bdevs_list": [ 00:24:26.574 { 
00:24:26.574 "name": "BaseBdev1", 00:24:26.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.574 "is_configured": false, 00:24:26.574 "data_offset": 0, 00:24:26.574 "data_size": 0 00:24:26.574 }, 00:24:26.574 { 00:24:26.574 "name": "BaseBdev2", 00:24:26.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.574 "is_configured": false, 00:24:26.574 "data_offset": 0, 00:24:26.574 "data_size": 0 00:24:26.574 }, 00:24:26.574 { 00:24:26.574 "name": "BaseBdev3", 00:24:26.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.574 "is_configured": false, 00:24:26.574 "data_offset": 0, 00:24:26.574 "data_size": 0 00:24:26.574 }, 00:24:26.574 { 00:24:26.574 "name": "BaseBdev4", 00:24:26.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:26.574 "is_configured": false, 00:24:26.574 "data_offset": 0, 00:24:26.574 "data_size": 0 00:24:26.574 } 00:24:26.574 ] 00:24:26.574 }' 00:24:26.574 07:25:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:26.574 07:25:00 -- common/autotest_common.sh@10 -- # set +x 00:24:27.142 07:25:00 -- bdev/bdev_raid.sh@234 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:27.401 [2024-02-13 07:25:00.998277] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:27.401 [2024-02-13 07:25:00.998530] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006680 name Existed_Raid, state configuring 00:24:27.401 07:25:01 -- bdev/bdev_raid.sh@238 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:27.660 [2024-02-13 07:25:01.178411] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:24:27.660 [2024-02-13 07:25:01.179337] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:24:27.660 [2024-02-13 07:25:01.179469] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:27.660 [2024-02-13 07:25:01.179680] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:27.660 [2024-02-13 07:25:01.179890] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:27.660 [2024-02-13 07:25:01.180103] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:27.660 [2024-02-13 07:25:01.180210] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:27.660 [2024-02-13 07:25:01.180414] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:27.660 07:25:01 -- bdev/bdev_raid.sh@239 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:27.919 [2024-02-13 07:25:01.392436] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:27.919 BaseBdev1 00:24:27.919 07:25:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:24:27.919 07:25:01 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:24:27.919 07:25:01 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:27.919 07:25:01 -- common/autotest_common.sh@887 -- # local i 00:24:27.919 07:25:01 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:27.919 07:25:01 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:27.919 07:25:01 -- common/autotest_common.sh@890 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:27.919 07:25:01 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:28.178 [ 00:24:28.178 { 00:24:28.178 "name": "BaseBdev1", 00:24:28.178 "aliases": [ 00:24:28.178 "d91311a6-7280-4f46-b4c7-47e192814539" 00:24:28.178 ], 00:24:28.178 "product_name": "Malloc disk", 00:24:28.178 "block_size": 512, 00:24:28.178 "num_blocks": 65536, 00:24:28.178 "uuid": "d91311a6-7280-4f46-b4c7-47e192814539", 00:24:28.178 "assigned_rate_limits": { 00:24:28.178 "rw_ios_per_sec": 0, 00:24:28.178 "rw_mbytes_per_sec": 0, 00:24:28.178 "r_mbytes_per_sec": 0, 00:24:28.178 "w_mbytes_per_sec": 0 00:24:28.178 }, 00:24:28.178 "claimed": true, 00:24:28.178 "claim_type": "exclusive_write", 00:24:28.178 "zoned": false, 00:24:28.178 "supported_io_types": { 00:24:28.178 "read": true, 00:24:28.178 "write": true, 00:24:28.178 "unmap": true, 00:24:28.178 "write_zeroes": true, 00:24:28.178 "flush": true, 00:24:28.178 "reset": true, 00:24:28.178 "compare": false, 00:24:28.178 "compare_and_write": false, 00:24:28.178 "abort": true, 00:24:28.178 "nvme_admin": false, 00:24:28.178 "nvme_io": false 00:24:28.178 }, 00:24:28.178 "memory_domains": [ 00:24:28.178 { 00:24:28.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:28.178 "dma_device_type": 2 00:24:28.178 } 00:24:28.178 ], 00:24:28.178 "driver_specific": {} 00:24:28.178 } 00:24:28.178 ] 00:24:28.178 07:25:01 -- common/autotest_common.sh@893 -- # return 0 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:28.179 07:25:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:28.438 07:25:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:28.438 "name": "Existed_Raid", 00:24:28.438 "uuid": "0205e9a5-f76f-4f58-96e3-c1b205371670", 00:24:28.438 "strip_size_kb": 64, 00:24:28.438 "state": "configuring", 00:24:28.438 "raid_level": "raid5f", 00:24:28.438 "superblock": true, 00:24:28.438 "num_base_bdevs": 4, 00:24:28.438 "num_base_bdevs_discovered": 1, 00:24:28.438 "num_base_bdevs_operational": 4, 00:24:28.438 "base_bdevs_list": [ 00:24:28.438 { 00:24:28.438 "name": "BaseBdev1", 00:24:28.438 "uuid": "d91311a6-7280-4f46-b4c7-47e192814539", 00:24:28.438 "is_configured": true, 00:24:28.438 "data_offset": 2048, 00:24:28.438 "data_size": 63488 00:24:28.438 }, 00:24:28.438 { 00:24:28.438 "name": "BaseBdev2", 00:24:28.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.438 "is_configured": false, 00:24:28.438 "data_offset": 0, 00:24:28.438 "data_size": 0 
00:24:28.438 }, 00:24:28.438 { 00:24:28.438 "name": "BaseBdev3", 00:24:28.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.438 "is_configured": false, 00:24:28.438 "data_offset": 0, 00:24:28.438 "data_size": 0 00:24:28.438 }, 00:24:28.438 { 00:24:28.438 "name": "BaseBdev4", 00:24:28.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:28.438 "is_configured": false, 00:24:28.438 "data_offset": 0, 00:24:28.438 "data_size": 0 00:24:28.438 } 00:24:28.438 ] 00:24:28.438 }' 00:24:28.438 07:25:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:28.438 07:25:02 -- common/autotest_common.sh@10 -- # set +x 00:24:29.003 07:25:02 -- bdev/bdev_raid.sh@242 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:24:29.262 [2024-02-13 07:25:02.865100] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:24:29.262 [2024-02-13 07:25:02.865363] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000006980 name Existed_Raid, state configuring 00:24:29.262 07:25:02 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:24:29.262 07:25:02 -- bdev/bdev_raid.sh@246 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:29.521 07:25:03 -- bdev/bdev_raid.sh@247 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:24:29.781 BaseBdev1 00:24:29.781 07:25:03 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:24:29.781 07:25:03 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev1 00:24:29.781 07:25:03 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:29.781 07:25:03 -- common/autotest_common.sh@887 -- # local i 00:24:29.781 07:25:03 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:29.781 07:25:03 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:29.781 07:25:03 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:30.040 07:25:03 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:24:30.299 [ 00:24:30.299 { 00:24:30.299 "name": "BaseBdev1", 00:24:30.299 "aliases": [ 00:24:30.299 "2c927e0d-99b4-46db-b965-763fd0c8805d" 00:24:30.299 ], 00:24:30.299 "product_name": "Malloc disk", 00:24:30.299 "block_size": 512, 00:24:30.299 "num_blocks": 65536, 00:24:30.299 "uuid": "2c927e0d-99b4-46db-b965-763fd0c8805d", 00:24:30.299 "assigned_rate_limits": { 00:24:30.299 "rw_ios_per_sec": 0, 00:24:30.299 "rw_mbytes_per_sec": 0, 00:24:30.299 "r_mbytes_per_sec": 0, 00:24:30.299 "w_mbytes_per_sec": 0 00:24:30.299 }, 00:24:30.299 "claimed": false, 00:24:30.299 "zoned": false, 00:24:30.299 "supported_io_types": { 00:24:30.299 "read": true, 00:24:30.299 "write": true, 00:24:30.299 "unmap": true, 00:24:30.299 "write_zeroes": true, 00:24:30.299 "flush": true, 00:24:30.299 "reset": true, 00:24:30.299 "compare": false, 00:24:30.299 "compare_and_write": false, 00:24:30.299 "abort": true, 00:24:30.299 "nvme_admin": false, 00:24:30.299 "nvme_io": false 00:24:30.299 }, 00:24:30.299 "memory_domains": [ 00:24:30.299 { 00:24:30.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:30.299 "dma_device_type": 2 00:24:30.299 } 00:24:30.299 ], 00:24:30.299 "driver_specific": {} 00:24:30.299 } 00:24:30.299 ] 00:24:30.299 07:25:03 -- common/autotest_common.sh@893 -- # return 0 00:24:30.299 07:25:03 -- 
bdev/bdev_raid.sh@253 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:24:30.558 [2024-02-13 07:25:04.066983] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:24:30.558 [2024-02-13 07:25:04.069134] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:24:30.558 [2024-02-13 07:25:04.069363] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:24:30.558 [2024-02-13 07:25:04.069516] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:24:30.558 [2024-02-13 07:25:04.069657] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:24:30.558 [2024-02-13 07:25:04.069756] bdev.c:8014:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:24:30.558 [2024-02-13 07:25:04.069810] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:30.558 07:25:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:30.559 07:25:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:30.817 07:25:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:30.817 "name": "Existed_Raid", 00:24:30.818 "uuid": "44f285a4-cc9d-491b-99b9-eee86c256bd3", 00:24:30.818 "strip_size_kb": 64, 00:24:30.818 "state": "configuring", 00:24:30.818 "raid_level": "raid5f", 00:24:30.818 "superblock": true, 00:24:30.818 "num_base_bdevs": 4, 00:24:30.818 "num_base_bdevs_discovered": 1, 00:24:30.818 "num_base_bdevs_operational": 4, 00:24:30.818 "base_bdevs_list": [ 00:24:30.818 { 00:24:30.818 "name": "BaseBdev1", 00:24:30.818 "uuid": "2c927e0d-99b4-46db-b965-763fd0c8805d", 00:24:30.818 "is_configured": true, 00:24:30.818 "data_offset": 2048, 00:24:30.818 "data_size": 63488 00:24:30.818 }, 00:24:30.818 { 00:24:30.818 "name": "BaseBdev2", 00:24:30.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.818 "is_configured": false, 00:24:30.818 "data_offset": 0, 00:24:30.818 "data_size": 0 00:24:30.818 }, 00:24:30.818 { 00:24:30.818 "name": "BaseBdev3", 00:24:30.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:30.818 "is_configured": false, 00:24:30.818 "data_offset": 0, 00:24:30.818 "data_size": 0 00:24:30.818 }, 00:24:30.818 { 00:24:30.818 "name": "BaseBdev4", 00:24:30.818 "uuid": "00000000-0000-0000-0000-000000000000", 
00:24:30.818 "is_configured": false, 00:24:30.818 "data_offset": 0, 00:24:30.818 "data_size": 0 00:24:30.818 } 00:24:30.818 ] 00:24:30.818 }' 00:24:30.818 07:25:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:30.818 07:25:04 -- common/autotest_common.sh@10 -- # set +x 00:24:31.386 07:25:04 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:24:31.645 [2024-02-13 07:25:05.166925] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:24:31.645 BaseBdev2 00:24:31.645 07:25:05 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:24:31.645 07:25:05 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev2 00:24:31.645 07:25:05 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:31.645 07:25:05 -- common/autotest_common.sh@887 -- # local i 00:24:31.645 07:25:05 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:31.645 07:25:05 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:31.645 07:25:05 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:31.904 07:25:05 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:24:31.904 [ 00:24:31.904 { 00:24:31.904 "name": "BaseBdev2", 00:24:31.904 "aliases": [ 00:24:31.904 "b724ae8d-39f6-4b1b-b90d-0fbfa2a9bc57" 00:24:31.904 ], 00:24:31.904 "product_name": "Malloc disk", 00:24:31.904 "block_size": 512, 00:24:31.904 "num_blocks": 65536, 00:24:31.904 "uuid": "b724ae8d-39f6-4b1b-b90d-0fbfa2a9bc57", 00:24:31.904 "assigned_rate_limits": { 00:24:31.904 "rw_ios_per_sec": 0, 00:24:31.904 "rw_mbytes_per_sec": 0, 00:24:31.904 "r_mbytes_per_sec": 0, 00:24:31.904 "w_mbytes_per_sec": 0 00:24:31.904 }, 00:24:31.904 "claimed": true, 00:24:31.904 "claim_type": "exclusive_write", 00:24:31.904 "zoned": false, 00:24:31.904 "supported_io_types": { 00:24:31.904 "read": true, 00:24:31.904 "write": true, 00:24:31.904 "unmap": true, 00:24:31.904 "write_zeroes": true, 00:24:31.904 "flush": true, 00:24:31.904 "reset": true, 00:24:31.904 "compare": false, 00:24:31.904 "compare_and_write": false, 00:24:31.904 "abort": true, 00:24:31.904 "nvme_admin": false, 00:24:31.904 "nvme_io": false 00:24:31.904 }, 00:24:31.904 "memory_domains": [ 00:24:31.904 { 00:24:31.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:31.904 "dma_device_type": 2 00:24:31.904 } 00:24:31.904 ], 00:24:31.904 "driver_specific": {} 00:24:31.904 } 00:24:31.904 ] 00:24:31.904 07:25:05 -- common/autotest_common.sh@893 -- # return 0 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:31.904 07:25:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:32.164 07:25:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:32.164 "name": "Existed_Raid", 00:24:32.164 "uuid": "44f285a4-cc9d-491b-99b9-eee86c256bd3", 00:24:32.164 "strip_size_kb": 64, 00:24:32.164 "state": "configuring", 00:24:32.164 "raid_level": "raid5f", 00:24:32.164 "superblock": true, 00:24:32.164 "num_base_bdevs": 4, 00:24:32.164 "num_base_bdevs_discovered": 2, 00:24:32.164 "num_base_bdevs_operational": 4, 00:24:32.164 "base_bdevs_list": [ 00:24:32.164 { 00:24:32.164 "name": "BaseBdev1", 00:24:32.164 "uuid": "2c927e0d-99b4-46db-b965-763fd0c8805d", 00:24:32.164 "is_configured": true, 00:24:32.164 "data_offset": 2048, 00:24:32.164 "data_size": 63488 00:24:32.164 }, 00:24:32.164 { 00:24:32.164 "name": "BaseBdev2", 00:24:32.164 "uuid": "b724ae8d-39f6-4b1b-b90d-0fbfa2a9bc57", 00:24:32.164 "is_configured": true, 00:24:32.164 "data_offset": 2048, 00:24:32.164 "data_size": 63488 00:24:32.164 }, 00:24:32.164 { 00:24:32.164 "name": "BaseBdev3", 00:24:32.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.164 "is_configured": false, 00:24:32.164 "data_offset": 0, 00:24:32.164 "data_size": 0 00:24:32.164 }, 00:24:32.164 { 00:24:32.164 "name": "BaseBdev4", 00:24:32.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:32.164 "is_configured": false, 00:24:32.164 "data_offset": 0, 00:24:32.164 "data_size": 0 00:24:32.164 } 00:24:32.164 ] 00:24:32.164 }' 00:24:32.164 07:25:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:32.164 07:25:05 -- common/autotest_common.sh@10 -- # set +x 00:24:32.732 07:25:06 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:24:32.991 [2024-02-13 07:25:06.635085] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:24:32.991 BaseBdev3 00:24:32.991 07:25:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:24:32.991 07:25:06 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev3 00:24:32.991 07:25:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:32.991 07:25:06 -- common/autotest_common.sh@887 -- # local i 00:24:32.991 07:25:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:32.991 07:25:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:32.991 07:25:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:33.250 07:25:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:24:33.509 [ 00:24:33.509 { 00:24:33.509 "name": "BaseBdev3", 00:24:33.509 "aliases": [ 00:24:33.509 "f68e177e-c902-4de3-9dd6-cc0e65dff076" 00:24:33.509 ], 00:24:33.509 "product_name": "Malloc disk", 00:24:33.509 "block_size": 512, 00:24:33.509 "num_blocks": 65536, 00:24:33.509 "uuid": "f68e177e-c902-4de3-9dd6-cc0e65dff076", 00:24:33.509 "assigned_rate_limits": { 00:24:33.509 "rw_ios_per_sec": 0, 00:24:33.509 "rw_mbytes_per_sec": 0, 00:24:33.509 "r_mbytes_per_sec": 0, 00:24:33.509 "w_mbytes_per_sec": 0 00:24:33.509 }, 00:24:33.509 "claimed": true, 00:24:33.509 "claim_type": "exclusive_write", 
00:24:33.509 "zoned": false, 00:24:33.509 "supported_io_types": { 00:24:33.509 "read": true, 00:24:33.509 "write": true, 00:24:33.509 "unmap": true, 00:24:33.509 "write_zeroes": true, 00:24:33.509 "flush": true, 00:24:33.509 "reset": true, 00:24:33.509 "compare": false, 00:24:33.509 "compare_and_write": false, 00:24:33.509 "abort": true, 00:24:33.509 "nvme_admin": false, 00:24:33.509 "nvme_io": false 00:24:33.509 }, 00:24:33.509 "memory_domains": [ 00:24:33.509 { 00:24:33.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:33.509 "dma_device_type": 2 00:24:33.509 } 00:24:33.509 ], 00:24:33.509 "driver_specific": {} 00:24:33.509 } 00:24:33.509 ] 00:24:33.509 07:25:07 -- common/autotest_common.sh@893 -- # return 0 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:33.509 07:25:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:33.768 07:25:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:33.768 "name": "Existed_Raid", 00:24:33.768 "uuid": "44f285a4-cc9d-491b-99b9-eee86c256bd3", 00:24:33.768 "strip_size_kb": 64, 00:24:33.768 "state": "configuring", 00:24:33.768 "raid_level": "raid5f", 00:24:33.768 "superblock": true, 00:24:33.768 "num_base_bdevs": 4, 00:24:33.768 "num_base_bdevs_discovered": 3, 00:24:33.768 "num_base_bdevs_operational": 4, 00:24:33.768 "base_bdevs_list": [ 00:24:33.768 { 00:24:33.768 "name": "BaseBdev1", 00:24:33.768 "uuid": "2c927e0d-99b4-46db-b965-763fd0c8805d", 00:24:33.768 "is_configured": true, 00:24:33.768 "data_offset": 2048, 00:24:33.768 "data_size": 63488 00:24:33.768 }, 00:24:33.768 { 00:24:33.768 "name": "BaseBdev2", 00:24:33.768 "uuid": "b724ae8d-39f6-4b1b-b90d-0fbfa2a9bc57", 00:24:33.768 "is_configured": true, 00:24:33.768 "data_offset": 2048, 00:24:33.768 "data_size": 63488 00:24:33.768 }, 00:24:33.768 { 00:24:33.768 "name": "BaseBdev3", 00:24:33.768 "uuid": "f68e177e-c902-4de3-9dd6-cc0e65dff076", 00:24:33.768 "is_configured": true, 00:24:33.768 "data_offset": 2048, 00:24:33.768 "data_size": 63488 00:24:33.768 }, 00:24:33.768 { 00:24:33.768 "name": "BaseBdev4", 00:24:33.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:33.768 "is_configured": false, 00:24:33.768 "data_offset": 0, 00:24:33.768 "data_size": 0 00:24:33.768 } 00:24:33.768 ] 00:24:33.768 }' 00:24:33.768 07:25:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:33.768 07:25:07 -- common/autotest_common.sh@10 -- # set +x 00:24:34.336 07:25:07 -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:24:34.595 [2024-02-13 07:25:08.178846] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:24:34.595 [2024-02-13 07:25:08.179370] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000007880 00:24:34.595 [2024-02-13 07:25:08.179490] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:34.595 BaseBdev4 00:24:34.595 [2024-02-13 07:25:08.179660] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005790 00:24:34.595 [2024-02-13 07:25:08.185958] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000007880 00:24:34.595 [2024-02-13 07:25:08.186099] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x616000007880 00:24:34.595 [2024-02-13 07:25:08.186357] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:34.595 07:25:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:24:34.595 07:25:08 -- common/autotest_common.sh@885 -- # local bdev_name=BaseBdev4 00:24:34.595 07:25:08 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:24:34.595 07:25:08 -- common/autotest_common.sh@887 -- # local i 00:24:34.595 07:25:08 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:24:34.595 07:25:08 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:24:34.595 07:25:08 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:24:34.854 07:25:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:24:35.113 [ 00:24:35.113 { 00:24:35.113 "name": "BaseBdev4", 00:24:35.113 "aliases": [ 00:24:35.113 "25026d2d-2357-4d44-8fa2-45fc9e612060" 00:24:35.113 ], 00:24:35.113 "product_name": "Malloc disk", 00:24:35.113 "block_size": 512, 00:24:35.113 "num_blocks": 65536, 00:24:35.113 "uuid": "25026d2d-2357-4d44-8fa2-45fc9e612060", 00:24:35.113 "assigned_rate_limits": { 00:24:35.113 "rw_ios_per_sec": 0, 00:24:35.113 "rw_mbytes_per_sec": 0, 00:24:35.113 "r_mbytes_per_sec": 0, 00:24:35.113 "w_mbytes_per_sec": 0 00:24:35.113 }, 00:24:35.113 "claimed": true, 00:24:35.113 "claim_type": "exclusive_write", 00:24:35.113 "zoned": false, 00:24:35.113 "supported_io_types": { 00:24:35.113 "read": true, 00:24:35.113 "write": true, 00:24:35.113 "unmap": true, 00:24:35.113 "write_zeroes": true, 00:24:35.113 "flush": true, 00:24:35.113 "reset": true, 00:24:35.113 "compare": false, 00:24:35.113 "compare_and_write": false, 00:24:35.113 "abort": true, 00:24:35.113 "nvme_admin": false, 00:24:35.113 "nvme_io": false 00:24:35.113 }, 00:24:35.113 "memory_domains": [ 00:24:35.113 { 00:24:35.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:24:35.113 "dma_device_type": 2 00:24:35.113 } 00:24:35.113 ], 00:24:35.113 "driver_specific": {} 00:24:35.113 } 00:24:35.113 ] 00:24:35.113 07:25:08 -- common/autotest_common.sh@893 -- # return 0 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid5f 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:35.113 "name": "Existed_Raid", 00:24:35.113 "uuid": "44f285a4-cc9d-491b-99b9-eee86c256bd3", 00:24:35.113 "strip_size_kb": 64, 00:24:35.113 "state": "online", 00:24:35.113 "raid_level": "raid5f", 00:24:35.113 "superblock": true, 00:24:35.113 "num_base_bdevs": 4, 00:24:35.113 "num_base_bdevs_discovered": 4, 00:24:35.113 "num_base_bdevs_operational": 4, 00:24:35.113 "base_bdevs_list": [ 00:24:35.113 { 00:24:35.113 "name": "BaseBdev1", 00:24:35.113 "uuid": "2c927e0d-99b4-46db-b965-763fd0c8805d", 00:24:35.113 "is_configured": true, 00:24:35.113 "data_offset": 2048, 00:24:35.113 "data_size": 63488 00:24:35.113 }, 00:24:35.113 { 00:24:35.113 "name": "BaseBdev2", 00:24:35.113 "uuid": "b724ae8d-39f6-4b1b-b90d-0fbfa2a9bc57", 00:24:35.113 "is_configured": true, 00:24:35.113 "data_offset": 2048, 00:24:35.113 "data_size": 63488 00:24:35.113 }, 00:24:35.113 { 00:24:35.113 "name": "BaseBdev3", 00:24:35.113 "uuid": "f68e177e-c902-4de3-9dd6-cc0e65dff076", 00:24:35.113 "is_configured": true, 00:24:35.113 "data_offset": 2048, 00:24:35.113 "data_size": 63488 00:24:35.113 }, 00:24:35.113 { 00:24:35.113 "name": "BaseBdev4", 00:24:35.113 "uuid": "25026d2d-2357-4d44-8fa2-45fc9e612060", 00:24:35.113 "is_configured": true, 00:24:35.113 "data_offset": 2048, 00:24:35.113 "data_size": 63488 00:24:35.113 } 00:24:35.113 ] 00:24:35.113 }' 00:24:35.113 07:25:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:35.113 07:25:08 -- common/autotest_common.sh@10 -- # set +x 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@262 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:24:36.049 [2024-02-13 07:25:09.638110] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid5f 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:36.049 07:25:09 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:36.049 07:25:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:24:36.310 07:25:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:36.310 "name": "Existed_Raid", 00:24:36.310 "uuid": "44f285a4-cc9d-491b-99b9-eee86c256bd3", 00:24:36.310 "strip_size_kb": 64, 00:24:36.310 "state": "online", 00:24:36.310 "raid_level": "raid5f", 00:24:36.310 "superblock": true, 00:24:36.310 "num_base_bdevs": 4, 00:24:36.310 "num_base_bdevs_discovered": 3, 00:24:36.310 "num_base_bdevs_operational": 3, 00:24:36.310 "base_bdevs_list": [ 00:24:36.310 { 00:24:36.310 "name": null, 00:24:36.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:36.310 "is_configured": false, 00:24:36.310 "data_offset": 2048, 00:24:36.310 "data_size": 63488 00:24:36.310 }, 00:24:36.310 { 00:24:36.310 "name": "BaseBdev2", 00:24:36.310 "uuid": "b724ae8d-39f6-4b1b-b90d-0fbfa2a9bc57", 00:24:36.310 "is_configured": true, 00:24:36.310 "data_offset": 2048, 00:24:36.310 "data_size": 63488 00:24:36.310 }, 00:24:36.310 { 00:24:36.310 "name": "BaseBdev3", 00:24:36.310 "uuid": "f68e177e-c902-4de3-9dd6-cc0e65dff076", 00:24:36.310 "is_configured": true, 00:24:36.310 "data_offset": 2048, 00:24:36.310 "data_size": 63488 00:24:36.310 }, 00:24:36.310 { 00:24:36.310 "name": "BaseBdev4", 00:24:36.310 "uuid": "25026d2d-2357-4d44-8fa2-45fc9e612060", 00:24:36.310 "is_configured": true, 00:24:36.310 "data_offset": 2048, 00:24:36.310 "data_size": 63488 00:24:36.310 } 00:24:36.310 ] 00:24:36.310 }' 00:24:36.310 07:25:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:36.310 07:25:09 -- common/autotest_common.sh@10 -- # set +x 00:24:37.273 07:25:10 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:24:37.273 07:25:10 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:37.273 07:25:10 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.273 07:25:10 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:37.273 07:25:10 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:37.273 07:25:10 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:37.273 07:25:10 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:24:37.532 [2024-02-13 07:25:11.062278] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:24:37.532 [2024-02-13 07:25:11.062432] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:37.532 [2024-02-13 07:25:11.062606] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:37.532 07:25:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:37.532 07:25:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:37.532 07:25:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:37.532 07:25:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:37.791 07:25:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:37.791 07:25:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:37.791 07:25:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:24:38.049 [2024-02-13 07:25:11.522112] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:24:38.049 07:25:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:38.049 07:25:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:38.049 07:25:11 -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.049 07:25:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:24:38.308 07:25:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:24:38.308 07:25:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:24:38.308 07:25:11 -- bdev/bdev_raid.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:24:38.567 [2024-02-13 07:25:12.006609] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:24:38.567 [2024-02-13 07:25:12.006812] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000007880 name Existed_Raid, state offline 00:24:38.567 07:25:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:24:38.567 07:25:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:24:38.567 07:25:12 -- bdev/bdev_raid.sh@281 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:38.567 07:25:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:24:38.826 07:25:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:24:38.826 07:25:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:24:38.826 07:25:12 -- bdev/bdev_raid.sh@287 -- # killprocess 135390 00:24:38.826 07:25:12 -- common/autotest_common.sh@924 -- # '[' -z 135390 ']' 00:24:38.826 07:25:12 -- common/autotest_common.sh@928 -- # kill -0 135390 00:24:38.826 07:25:12 -- common/autotest_common.sh@929 -- # uname 00:24:38.826 07:25:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:38.826 07:25:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 135390 00:24:38.826 killing process with pid 135390 00:24:38.826 07:25:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:38.826 07:25:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:38.826 07:25:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 135390' 00:24:38.826 07:25:12 -- common/autotest_common.sh@943 -- # kill 135390 00:24:38.826 07:25:12 -- common/autotest_common.sh@948 -- # wait 135390 00:24:38.826 [2024-02-13 07:25:12.305523] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:38.826 [2024-02-13 07:25:12.305657] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:24:39.762 ************************************ 00:24:39.762 END TEST raid5f_state_function_test_sb 00:24:39.762 ************************************ 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@289 -- # return 0 00:24:39.762 00:24:39.762 real 0m14.594s 00:24:39.762 user 0m26.144s 00:24:39.762 sys 0m1.637s 00:24:39.762 07:25:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:39.762 07:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@746 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:24:39.762 07:25:13 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:24:39.762 07:25:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:24:39.762 07:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.762 
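The state-function test that just ended exercises the degraded-array path: base bdevs are deleted out from under an online raid5f array one at a time, and verify_raid_bdev_state re-queries the array after each removal to confirm it stays online with a single missing member before the final teardown takes it offline. A minimal bash sketch of one such removal-and-check step, reusing the rpc.py socket and jq filter from the trace above (BaseBdev1 and Existed_Raid are the names this particular run happened to use):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # remove a single member from the online array
    $rpc -s $sock bdev_malloc_delete BaseBdev1
    # raid5f tolerates one missing base bdev, so the array should still report "online"
    $rpc -s $sock bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid") | .state'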
************************************ 00:24:39.762 START TEST raid5f_superblock_test 00:24:39.762 ************************************ 00:24:39.762 07:25:13 -- common/autotest_common.sh@1102 -- # raid_superblock_test raid5f 4 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid5f 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:24:39.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:24:39.762 07:25:13 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:24:39.763 07:25:13 -- bdev/bdev_raid.sh@349 -- # '[' raid5f '!=' raid1 ']' 00:24:39.763 07:25:13 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:24:39.763 07:25:13 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:24:39.763 07:25:13 -- bdev/bdev_raid.sh@357 -- # raid_pid=135862 00:24:39.763 07:25:13 -- bdev/bdev_raid.sh@358 -- # waitforlisten 135862 /var/tmp/spdk-raid.sock 00:24:39.763 07:25:13 -- common/autotest_common.sh@817 -- # '[' -z 135862 ']' 00:24:39.763 07:25:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:24:39.763 07:25:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:39.763 07:25:13 -- bdev/bdev_raid.sh@356 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:24:39.763 07:25:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:24:39.763 07:25:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:39.763 07:25:13 -- common/autotest_common.sh@10 -- # set +x 00:24:39.763 [2024-02-13 07:25:13.420202] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
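The superblock test runs against a standalone bdev_svc stub rather than a full target: the harness launches it with the raid RPC socket and bdev_raid debug tracing enabled, records the pid for the later killprocess, and blocks in waitforlisten until the UNIX socket answers. A minimal sketch of that startup, assuming the paths from the trace above (polling rpc_get_methods is an illustrative readiness check, not necessarily what waitforlisten does internally):

    app=/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # start the stub app with raid debug traces enabled and remember its pid
    $app -r $sock -L bdev_raid &
    raid_pid=$!
    # poll until the RPC socket accepts requests before issuing bdev RPCs
    until $rpc -s $sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done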
00:24:39.763 [2024-02-13 07:25:13.420372] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid135862 ] 00:24:40.021 [2024-02-13 07:25:13.563531] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.279 [2024-02-13 07:25:13.743456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.279 [2024-02-13 07:25:13.920183] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:24:40.846 07:25:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:40.846 07:25:14 -- common/autotest_common.sh@850 -- # return 0 00:24:40.846 07:25:14 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:24:40.846 07:25:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:40.846 07:25:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:24:40.846 07:25:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:24:40.846 07:25:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:24:40.846 07:25:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:40.846 07:25:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:40.846 07:25:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:40.846 07:25:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:24:41.105 malloc1 00:24:41.105 07:25:14 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:41.363 [2024-02-13 07:25:14.800143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:41.363 [2024-02-13 07:25:14.800238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.364 [2024-02-13 07:25:14.800272] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:24:41.364 [2024-02-13 07:25:14.800317] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.364 [2024-02-13 07:25:14.802560] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.364 [2024-02-13 07:25:14.802607] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:41.364 pt1 00:24:41.364 07:25:14 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:41.364 07:25:14 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:41.364 07:25:14 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:24:41.364 07:25:14 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:24:41.364 07:25:14 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:24:41.364 07:25:14 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:41.364 07:25:14 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:41.364 07:25:14 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:41.364 07:25:14 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:24:41.364 malloc2 00:24:41.364 07:25:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:24:41.622 [2024-02-13 07:25:15.263903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:41.622 [2024-02-13 07:25:15.263980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:41.622 [2024-02-13 07:25:15.264027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:24:41.622 [2024-02-13 07:25:15.264083] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:41.622 [2024-02-13 07:25:15.266247] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:41.622 [2024-02-13 07:25:15.266296] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:41.623 pt2 00:24:41.623 07:25:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:41.623 07:25:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:41.623 07:25:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:24:41.623 07:25:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:24:41.623 07:25:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:24:41.623 07:25:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:41.623 07:25:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:41.623 07:25:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:41.623 07:25:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:24:41.881 malloc3 00:24:41.881 07:25:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:42.140 [2024-02-13 07:25:15.640524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:42.140 [2024-02-13 07:25:15.640602] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:42.140 [2024-02-13 07:25:15.640652] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:24:42.140 [2024-02-13 07:25:15.640693] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:42.140 [2024-02-13 07:25:15.642587] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:42.140 [2024-02-13 07:25:15.642637] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:42.140 pt3 00:24:42.140 07:25:15 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:42.140 07:25:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:42.140 07:25:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:24:42.140 07:25:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:24:42.140 07:25:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:24:42.140 07:25:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:24:42.140 07:25:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:24:42.140 07:25:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:24:42.140 07:25:15 -- bdev/bdev_raid.sh@370 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:24:42.399 malloc4 00:24:42.399 07:25:15 -- bdev/bdev_raid.sh@371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 
00000000-0000-0000-0000-000000000004 00:24:42.399 [2024-02-13 07:25:16.047543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:42.399 [2024-02-13 07:25:16.047636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:42.399 [2024-02-13 07:25:16.047682] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:24:42.399 [2024-02-13 07:25:16.047722] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:42.399 [2024-02-13 07:25:16.049583] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:42.399 [2024-02-13 07:25:16.049631] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:42.399 pt4 00:24:42.399 07:25:16 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:24:42.399 07:25:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:24:42.399 07:25:16 -- bdev/bdev_raid.sh@375 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:24:42.658 [2024-02-13 07:25:16.255629] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:42.658 [2024-02-13 07:25:16.257427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:42.658 [2024-02-13 07:25:16.257507] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:42.658 [2024-02-13 07:25:16.257604] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:42.658 [2024-02-13 07:25:16.257810] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009680 00:24:42.658 [2024-02-13 07:25:16.257824] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:42.658 [2024-02-13 07:25:16.257929] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:24:42.658 [2024-02-13 07:25:16.263334] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009680 00:24:42.658 [2024-02-13 07:25:16.263357] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009680 00:24:42.658 [2024-02-13 07:25:16.263503] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:42.658 07:25:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:42.917 07:25:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:42.917 "name": "raid_bdev1", 00:24:42.917 "uuid": 
"4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:42.917 "strip_size_kb": 64, 00:24:42.917 "state": "online", 00:24:42.917 "raid_level": "raid5f", 00:24:42.917 "superblock": true, 00:24:42.917 "num_base_bdevs": 4, 00:24:42.917 "num_base_bdevs_discovered": 4, 00:24:42.917 "num_base_bdevs_operational": 4, 00:24:42.917 "base_bdevs_list": [ 00:24:42.917 { 00:24:42.917 "name": "pt1", 00:24:42.917 "uuid": "5fbdc192-47b7-5f57-beec-cba2714877a1", 00:24:42.917 "is_configured": true, 00:24:42.917 "data_offset": 2048, 00:24:42.917 "data_size": 63488 00:24:42.917 }, 00:24:42.917 { 00:24:42.917 "name": "pt2", 00:24:42.917 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:42.917 "is_configured": true, 00:24:42.917 "data_offset": 2048, 00:24:42.917 "data_size": 63488 00:24:42.917 }, 00:24:42.917 { 00:24:42.917 "name": "pt3", 00:24:42.917 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:42.917 "is_configured": true, 00:24:42.917 "data_offset": 2048, 00:24:42.917 "data_size": 63488 00:24:42.917 }, 00:24:42.917 { 00:24:42.917 "name": "pt4", 00:24:42.917 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:42.917 "is_configured": true, 00:24:42.917 "data_offset": 2048, 00:24:42.917 "data_size": 63488 00:24:42.917 } 00:24:42.917 ] 00:24:42.917 }' 00:24:42.917 07:25:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:42.917 07:25:16 -- common/autotest_common.sh@10 -- # set +x 00:24:43.485 07:25:17 -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:43.485 07:25:17 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:24:43.744 [2024-02-13 07:25:17.341593] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:43.744 07:25:17 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=4329e872-a637-4017-8bd3-006e1e4c5a7d 00:24:43.744 07:25:17 -- bdev/bdev_raid.sh@380 -- # '[' -z 4329e872-a637-4017-8bd3-006e1e4c5a7d ']' 00:24:43.744 07:25:17 -- bdev/bdev_raid.sh@385 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:44.003 [2024-02-13 07:25:17.525487] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:44.003 [2024-02-13 07:25:17.525511] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:44.003 [2024-02-13 07:25:17.525576] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:44.003 [2024-02-13 07:25:17.525667] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:44.003 [2024-02-13 07:25:17.525680] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009680 name raid_bdev1, state offline 00:24:44.003 07:25:17 -- bdev/bdev_raid.sh@386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:44.003 07:25:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:24:44.262 07:25:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:24:44.262 07:25:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:24:44.262 07:25:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:44.262 07:25:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:44.262 07:25:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:44.262 07:25:17 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:24:44.521 07:25:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:44.521 07:25:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:44.780 07:25:18 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:24:44.780 07:25:18 -- bdev/bdev_raid.sh@393 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:45.039 07:25:18 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:24:45.039 07:25:18 -- bdev/bdev_raid.sh@395 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:24:45.297 07:25:18 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:24:45.297 07:25:18 -- bdev/bdev_raid.sh@401 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:45.297 07:25:18 -- common/autotest_common.sh@638 -- # local es=0 00:24:45.297 07:25:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:45.297 07:25:18 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.297 07:25:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.297 07:25:18 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.297 07:25:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.297 07:25:18 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.297 07:25:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:45.297 07:25:18 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:45.297 07:25:18 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:24:45.297 07:25:18 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:24:45.297 [2024-02-13 07:25:18.969784] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:24:45.297 [2024-02-13 07:25:18.971372] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:24:45.297 [2024-02-13 07:25:18.971444] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:24:45.297 [2024-02-13 07:25:18.971488] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:24:45.298 [2024-02-13 07:25:18.971537] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:24:45.298 [2024-02-13 07:25:18.971610] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:24:45.298 [2024-02-13 07:25:18.971644] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:24:45.298 [2024-02-13 07:25:18.971694] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:24:45.298 [2024-02-13 07:25:18.971717] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:45.298 [2024-02-13 07:25:18.971725] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009c80 name raid_bdev1, state configuring 00:24:45.298 request: 00:24:45.298 { 00:24:45.298 "name": "raid_bdev1", 00:24:45.298 "raid_level": "raid5f", 00:24:45.298 "base_bdevs": [ 00:24:45.298 "malloc1", 00:24:45.298 "malloc2", 00:24:45.298 "malloc3", 00:24:45.298 "malloc4" 00:24:45.298 ], 00:24:45.298 "superblock": false, 00:24:45.298 "strip_size_kb": 64, 00:24:45.298 "method": "bdev_raid_create", 00:24:45.298 "req_id": 1 00:24:45.298 } 00:24:45.298 Got JSON-RPC error response 00:24:45.298 response: 00:24:45.298 { 00:24:45.298 "code": -17, 00:24:45.298 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:24:45.298 } 00:24:45.298 07:25:18 -- common/autotest_common.sh@641 -- # es=1 00:24:45.298 07:25:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:45.298 07:25:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:45.298 07:25:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:45.298 07:25:18 -- bdev/bdev_raid.sh@403 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.298 07:25:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:24:45.556 07:25:19 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:24:45.556 07:25:19 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:24:45.556 07:25:19 -- bdev/bdev_raid.sh@409 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:45.815 [2024-02-13 07:25:19.381801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:45.815 [2024-02-13 07:25:19.381859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:45.815 [2024-02-13 07:25:19.381889] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:24:45.815 [2024-02-13 07:25:19.381912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:45.815 [2024-02-13 07:25:19.383680] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:45.815 [2024-02-13 07:25:19.383740] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:45.815 [2024-02-13 07:25:19.383827] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:45.815 [2024-02-13 07:25:19.383891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:45.815 pt1 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:45.815 07:25:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:46.073 07:25:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:46.073 "name": "raid_bdev1", 00:24:46.073 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:46.073 "strip_size_kb": 64, 00:24:46.073 "state": "configuring", 00:24:46.073 "raid_level": "raid5f", 00:24:46.073 "superblock": true, 00:24:46.073 "num_base_bdevs": 4, 00:24:46.074 "num_base_bdevs_discovered": 1, 00:24:46.074 "num_base_bdevs_operational": 4, 00:24:46.074 "base_bdevs_list": [ 00:24:46.074 { 00:24:46.074 "name": "pt1", 00:24:46.074 "uuid": "5fbdc192-47b7-5f57-beec-cba2714877a1", 00:24:46.074 "is_configured": true, 00:24:46.074 "data_offset": 2048, 00:24:46.074 "data_size": 63488 00:24:46.074 }, 00:24:46.074 { 00:24:46.074 "name": null, 00:24:46.074 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:46.074 "is_configured": false, 00:24:46.074 "data_offset": 2048, 00:24:46.074 "data_size": 63488 00:24:46.074 }, 00:24:46.074 { 00:24:46.074 "name": null, 00:24:46.074 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:46.074 "is_configured": false, 00:24:46.074 "data_offset": 2048, 00:24:46.074 "data_size": 63488 00:24:46.074 }, 00:24:46.074 { 00:24:46.074 "name": null, 00:24:46.074 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:46.074 "is_configured": false, 00:24:46.074 "data_offset": 2048, 00:24:46.074 "data_size": 63488 00:24:46.074 } 00:24:46.074 ] 00:24:46.074 }' 00:24:46.074 07:25:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:46.074 07:25:19 -- common/autotest_common.sh@10 -- # set +x 00:24:46.640 07:25:20 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:24:46.640 07:25:20 -- bdev/bdev_raid.sh@416 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:46.899 [2024-02-13 07:25:20.442012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:46.899 [2024-02-13 07:25:20.442068] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:46.899 [2024-02-13 07:25:20.442103] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:24:46.899 [2024-02-13 07:25:20.442122] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:46.899 [2024-02-13 07:25:20.442513] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:46.899 [2024-02-13 07:25:20.442553] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:46.899 [2024-02-13 07:25:20.442631] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:46.899 [2024-02-13 07:25:20.442654] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:46.899 pt2 00:24:46.899 07:25:20 -- bdev/bdev_raid.sh@417 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:47.158 [2024-02-13 07:25:20.622061] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:47.158 07:25:20 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:47.158 "name": "raid_bdev1", 00:24:47.158 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:47.158 "strip_size_kb": 64, 00:24:47.158 "state": "configuring", 00:24:47.158 "raid_level": "raid5f", 00:24:47.158 "superblock": true, 00:24:47.158 "num_base_bdevs": 4, 00:24:47.158 "num_base_bdevs_discovered": 1, 00:24:47.158 "num_base_bdevs_operational": 4, 00:24:47.158 "base_bdevs_list": [ 00:24:47.158 { 00:24:47.158 "name": "pt1", 00:24:47.158 "uuid": "5fbdc192-47b7-5f57-beec-cba2714877a1", 00:24:47.158 "is_configured": true, 00:24:47.158 "data_offset": 2048, 00:24:47.158 "data_size": 63488 00:24:47.158 }, 00:24:47.158 { 00:24:47.158 "name": null, 00:24:47.158 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:47.158 "is_configured": false, 00:24:47.158 "data_offset": 2048, 00:24:47.158 "data_size": 63488 00:24:47.158 }, 00:24:47.158 { 00:24:47.158 "name": null, 00:24:47.158 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:47.158 "is_configured": false, 00:24:47.158 "data_offset": 2048, 00:24:47.158 "data_size": 63488 00:24:47.158 }, 00:24:47.158 { 00:24:47.158 "name": null, 00:24:47.158 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:47.158 "is_configured": false, 00:24:47.158 "data_offset": 2048, 00:24:47.158 "data_size": 63488 00:24:47.158 } 00:24:47.158 ] 00:24:47.158 }' 00:24:47.158 07:25:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:47.158 07:25:20 -- common/autotest_common.sh@10 -- # set +x 00:24:47.725 07:25:21 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:24:47.725 07:25:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:47.725 07:25:21 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:47.983 [2024-02-13 07:25:21.574230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:47.983 [2024-02-13 07:25:21.574307] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:47.983 [2024-02-13 07:25:21.574348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:24:47.983 [2024-02-13 07:25:21.574366] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:47.983 [2024-02-13 07:25:21.574770] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:47.983 [2024-02-13 07:25:21.574828] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:47.983 [2024-02-13 07:25:21.574939] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:47.983 [2024-02-13 07:25:21.574963] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:47.983 pt2 00:24:47.983 07:25:21 -- 
bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:47.983 07:25:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:47.983 07:25:21 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:48.242 [2024-02-13 07:25:21.822269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:48.242 [2024-02-13 07:25:21.822345] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.242 [2024-02-13 07:25:21.822372] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:24:48.242 [2024-02-13 07:25:21.822397] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.242 [2024-02-13 07:25:21.822817] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.242 [2024-02-13 07:25:21.822923] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:48.242 [2024-02-13 07:25:21.823001] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:48.242 [2024-02-13 07:25:21.823023] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:48.242 pt3 00:24:48.242 07:25:21 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:48.242 07:25:21 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:24:48.242 07:25:21 -- bdev/bdev_raid.sh@423 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:48.502 [2024-02-13 07:25:22.026308] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:48.502 [2024-02-13 07:25:22.026393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:48.502 [2024-02-13 07:25:22.026423] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:24:48.502 [2024-02-13 07:25:22.026452] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:48.502 [2024-02-13 07:25:22.026860] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:48.502 [2024-02-13 07:25:22.026917] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:48.502 [2024-02-13 07:25:22.027030] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:48.502 [2024-02-13 07:25:22.027054] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:48.502 [2024-02-13 07:25:22.027191] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:24:48.502 [2024-02-13 07:25:22.027213] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:48.502 [2024-02-13 07:25:22.027308] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:24:48.502 [2024-02-13 07:25:22.032620] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:24:48.502 [2024-02-13 07:25:22.032653] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:24:48.502 [2024-02-13 07:25:22.032818] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:48.502 pt4 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs 
)) 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:48.502 07:25:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:48.760 07:25:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:48.760 "name": "raid_bdev1", 00:24:48.760 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:48.760 "strip_size_kb": 64, 00:24:48.760 "state": "online", 00:24:48.760 "raid_level": "raid5f", 00:24:48.760 "superblock": true, 00:24:48.760 "num_base_bdevs": 4, 00:24:48.760 "num_base_bdevs_discovered": 4, 00:24:48.760 "num_base_bdevs_operational": 4, 00:24:48.760 "base_bdevs_list": [ 00:24:48.760 { 00:24:48.760 "name": "pt1", 00:24:48.760 "uuid": "5fbdc192-47b7-5f57-beec-cba2714877a1", 00:24:48.760 "is_configured": true, 00:24:48.760 "data_offset": 2048, 00:24:48.760 "data_size": 63488 00:24:48.760 }, 00:24:48.760 { 00:24:48.760 "name": "pt2", 00:24:48.760 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:48.760 "is_configured": true, 00:24:48.760 "data_offset": 2048, 00:24:48.760 "data_size": 63488 00:24:48.760 }, 00:24:48.760 { 00:24:48.760 "name": "pt3", 00:24:48.760 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:48.760 "is_configured": true, 00:24:48.760 "data_offset": 2048, 00:24:48.760 "data_size": 63488 00:24:48.760 }, 00:24:48.760 { 00:24:48.760 "name": "pt4", 00:24:48.760 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:48.760 "is_configured": true, 00:24:48.760 "data_offset": 2048, 00:24:48.760 "data_size": 63488 00:24:48.760 } 00:24:48.760 ] 00:24:48.760 }' 00:24:48.760 07:25:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:48.760 07:25:22 -- common/autotest_common.sh@10 -- # set +x 00:24:49.326 07:25:22 -- bdev/bdev_raid.sh@430 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:49.326 07:25:22 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:24:49.326 [2024-02-13 07:25:22.975205] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:49.326 07:25:22 -- bdev/bdev_raid.sh@430 -- # '[' 4329e872-a637-4017-8bd3-006e1e4c5a7d '!=' 4329e872-a637-4017-8bd3-006e1e4c5a7d ']' 00:24:49.326 07:25:22 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid5f 00:24:49.326 07:25:22 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:24:49.326 07:25:22 -- bdev/bdev_raid.sh@196 -- # return 0 00:24:49.326 07:25:22 -- bdev/bdev_raid.sh@436 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:24:49.585 [2024-02-13 07:25:23.159105] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@439 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:49.585 07:25:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:49.843 07:25:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:49.843 "name": "raid_bdev1", 00:24:49.843 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:49.843 "strip_size_kb": 64, 00:24:49.843 "state": "online", 00:24:49.843 "raid_level": "raid5f", 00:24:49.843 "superblock": true, 00:24:49.843 "num_base_bdevs": 4, 00:24:49.843 "num_base_bdevs_discovered": 3, 00:24:49.843 "num_base_bdevs_operational": 3, 00:24:49.843 "base_bdevs_list": [ 00:24:49.843 { 00:24:49.843 "name": null, 00:24:49.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:49.843 "is_configured": false, 00:24:49.843 "data_offset": 2048, 00:24:49.843 "data_size": 63488 00:24:49.843 }, 00:24:49.843 { 00:24:49.843 "name": "pt2", 00:24:49.843 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:49.843 "is_configured": true, 00:24:49.843 "data_offset": 2048, 00:24:49.843 "data_size": 63488 00:24:49.843 }, 00:24:49.843 { 00:24:49.843 "name": "pt3", 00:24:49.843 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:49.843 "is_configured": true, 00:24:49.843 "data_offset": 2048, 00:24:49.843 "data_size": 63488 00:24:49.843 }, 00:24:49.843 { 00:24:49.843 "name": "pt4", 00:24:49.843 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:49.843 "is_configured": true, 00:24:49.843 "data_offset": 2048, 00:24:49.843 "data_size": 63488 00:24:49.843 } 00:24:49.843 ] 00:24:49.843 }' 00:24:49.843 07:25:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:49.843 07:25:23 -- common/autotest_common.sh@10 -- # set +x 00:24:50.410 07:25:24 -- bdev/bdev_raid.sh@442 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:50.669 [2024-02-13 07:25:24.267370] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:50.669 [2024-02-13 07:25:24.267411] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:50.669 [2024-02-13 07:25:24.267509] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:50.669 [2024-02-13 07:25:24.267587] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:50.669 [2024-02-13 07:25:24.267599] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:24:50.669 07:25:24 -- bdev/bdev_raid.sh@443 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:50.669 07:25:24 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:24:50.928 
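At this point raid_bdev1 has been deleted, but the superblocks written to the member malloc bdevs survive. The next phase tears down the remaining passthru bdevs and re-registers them one by one; the examine path finds the on-disk raid superblock on each new pt bdev and claims it back into raid_bdev1, which reassembles without any explicit bdev_raid_create call. A minimal sketch of that rebuild loop, using the same bdev names and UUIDs as the trace (pt1 stays deleted, so the array comes back online with three of four members):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # drop the surviving members, then re-register them; examine reassembles the array
    for i in 2 3 4; do
        $rpc -s $sock bdev_passthru_delete "pt$i"
    done
    for i in 2 3 4; do
        # re-creating the passthru bdev re-exposes the raid superblock on malloc$i
        $rpc -s $sock bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done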
07:25:24 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:24:50.928 07:25:24 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:24:50.928 07:25:24 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:24:50.928 07:25:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:50.928 07:25:24 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:51.187 07:25:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:51.187 07:25:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:51.187 07:25:24 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:51.446 07:25:24 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:51.446 07:25:24 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:51.446 07:25:24 -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:51.446 07:25:25 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:24:51.446 07:25:25 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:24:51.446 07:25:25 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:24:51.446 07:25:25 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:51.446 07:25:25 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:51.705 [2024-02-13 07:25:25.295525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:51.705 [2024-02-13 07:25:25.295622] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:51.705 [2024-02-13 07:25:25.295657] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:24:51.705 [2024-02-13 07:25:25.295684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:51.705 [2024-02-13 07:25:25.298004] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:51.705 [2024-02-13 07:25:25.298069] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:51.705 [2024-02-13 07:25:25.298169] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:51.705 [2024-02-13 07:25:25.298251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:51.705 pt2 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:51.705 07:25:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:51.964 07:25:25 -- bdev/bdev_raid.sh@127 
-- # raid_bdev_info='{ 00:24:51.964 "name": "raid_bdev1", 00:24:51.964 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:51.964 "strip_size_kb": 64, 00:24:51.964 "state": "configuring", 00:24:51.964 "raid_level": "raid5f", 00:24:51.964 "superblock": true, 00:24:51.964 "num_base_bdevs": 4, 00:24:51.964 "num_base_bdevs_discovered": 1, 00:24:51.964 "num_base_bdevs_operational": 3, 00:24:51.964 "base_bdevs_list": [ 00:24:51.964 { 00:24:51.964 "name": null, 00:24:51.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:51.964 "is_configured": false, 00:24:51.964 "data_offset": 2048, 00:24:51.964 "data_size": 63488 00:24:51.964 }, 00:24:51.964 { 00:24:51.964 "name": "pt2", 00:24:51.964 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:51.964 "is_configured": true, 00:24:51.964 "data_offset": 2048, 00:24:51.964 "data_size": 63488 00:24:51.964 }, 00:24:51.964 { 00:24:51.964 "name": null, 00:24:51.964 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:51.964 "is_configured": false, 00:24:51.964 "data_offset": 2048, 00:24:51.964 "data_size": 63488 00:24:51.964 }, 00:24:51.964 { 00:24:51.964 "name": null, 00:24:51.964 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:51.964 "is_configured": false, 00:24:51.964 "data_offset": 2048, 00:24:51.964 "data_size": 63488 00:24:51.964 } 00:24:51.964 ] 00:24:51.964 }' 00:24:51.964 07:25:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:51.964 07:25:25 -- common/autotest_common.sh@10 -- # set +x 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@455 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:52.532 [2024-02-13 07:25:26.203710] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:52.532 [2024-02-13 07:25:26.203802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:52.532 [2024-02-13 07:25:26.203863] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:24:52.532 [2024-02-13 07:25:26.203925] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:52.532 [2024-02-13 07:25:26.204422] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:52.532 [2024-02-13 07:25:26.204483] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:52.532 [2024-02-13 07:25:26.204625] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:52.532 [2024-02-13 07:25:26.204668] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:52.532 pt3 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 
00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:52.532 07:25:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:52.791 07:25:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:52.791 "name": "raid_bdev1", 00:24:52.791 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:52.791 "strip_size_kb": 64, 00:24:52.791 "state": "configuring", 00:24:52.791 "raid_level": "raid5f", 00:24:52.791 "superblock": true, 00:24:52.791 "num_base_bdevs": 4, 00:24:52.791 "num_base_bdevs_discovered": 2, 00:24:52.791 "num_base_bdevs_operational": 3, 00:24:52.791 "base_bdevs_list": [ 00:24:52.791 { 00:24:52.791 "name": null, 00:24:52.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:52.791 "is_configured": false, 00:24:52.791 "data_offset": 2048, 00:24:52.791 "data_size": 63488 00:24:52.791 }, 00:24:52.791 { 00:24:52.791 "name": "pt2", 00:24:52.791 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:52.791 "is_configured": true, 00:24:52.791 "data_offset": 2048, 00:24:52.791 "data_size": 63488 00:24:52.791 }, 00:24:52.791 { 00:24:52.791 "name": "pt3", 00:24:52.791 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:52.791 "is_configured": true, 00:24:52.791 "data_offset": 2048, 00:24:52.791 "data_size": 63488 00:24:52.791 }, 00:24:52.791 { 00:24:52.791 "name": null, 00:24:52.791 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:52.791 "is_configured": false, 00:24:52.791 "data_offset": 2048, 00:24:52.791 "data_size": 63488 00:24:52.791 } 00:24:52.791 ] 00:24:52.791 }' 00:24:52.791 07:25:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:52.791 07:25:26 -- common/autotest_common.sh@10 -- # set +x 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@462 -- # i=3 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@463 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:53.728 [2024-02-13 07:25:27.291919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:53.728 [2024-02-13 07:25:27.292043] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:53.728 [2024-02-13 07:25:27.292103] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:24:53.728 [2024-02-13 07:25:27.292164] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:53.728 [2024-02-13 07:25:27.292784] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:53.728 [2024-02-13 07:25:27.292846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:53.728 [2024-02-13 07:25:27.293026] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:53.728 [2024-02-13 07:25:27.293092] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:53.728 [2024-02-13 07:25:27.293292] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000bd80 00:24:53.728 [2024-02-13 07:25:27.293318] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:53.728 [2024-02-13 07:25:27.293468] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:24:53.728 [2024-02-13 07:25:27.299081] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000bd80 00:24:53.728 [2024-02-13 07:25:27.299111] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000bd80 00:24:53.728 pt4 00:24:53.728 [2024-02-13 07:25:27.299444] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:53.728 07:25:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:53.987 07:25:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:53.987 "name": "raid_bdev1", 00:24:53.987 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:53.987 "strip_size_kb": 64, 00:24:53.987 "state": "online", 00:24:53.987 "raid_level": "raid5f", 00:24:53.987 "superblock": true, 00:24:53.987 "num_base_bdevs": 4, 00:24:53.987 "num_base_bdevs_discovered": 3, 00:24:53.987 "num_base_bdevs_operational": 3, 00:24:53.987 "base_bdevs_list": [ 00:24:53.987 { 00:24:53.987 "name": null, 00:24:53.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:53.987 "is_configured": false, 00:24:53.987 "data_offset": 2048, 00:24:53.987 "data_size": 63488 00:24:53.987 }, 00:24:53.987 { 00:24:53.987 "name": "pt2", 00:24:53.987 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:53.987 "is_configured": true, 00:24:53.987 "data_offset": 2048, 00:24:53.987 "data_size": 63488 00:24:53.987 }, 00:24:53.987 { 00:24:53.987 "name": "pt3", 00:24:53.987 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:53.987 "is_configured": true, 00:24:53.987 "data_offset": 2048, 00:24:53.987 "data_size": 63488 00:24:53.987 }, 00:24:53.987 { 00:24:53.987 "name": "pt4", 00:24:53.987 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:53.987 "is_configured": true, 00:24:53.987 "data_offset": 2048, 00:24:53.987 "data_size": 63488 00:24:53.987 } 00:24:53.987 ] 00:24:53.987 }' 00:24:53.987 07:25:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:53.987 07:25:27 -- common/autotest_common.sh@10 -- # set +x 00:24:54.554 07:25:28 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:24:54.554 07:25:28 -- bdev/bdev_raid.sh@470 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:24:54.812 [2024-02-13 07:25:28.445793] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:54.812 [2024-02-13 07:25:28.445823] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:24:54.812 [2024-02-13 07:25:28.445893] bdev_raid.c: 
449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:54.812 [2024-02-13 07:25:28.445965] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:54.812 [2024-02-13 07:25:28.445977] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state offline 00:24:54.812 07:25:28 -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:54.812 07:25:28 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:24:55.070 07:25:28 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:24:55.070 07:25:28 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:24:55.070 07:25:28 -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:24:55.329 [2024-02-13 07:25:28.856505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:24:55.329 [2024-02-13 07:25:28.856599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:55.329 [2024-02-13 07:25:28.856643] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:24:55.329 [2024-02-13 07:25:28.856665] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:55.329 [2024-02-13 07:25:28.858851] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:55.329 [2024-02-13 07:25:28.858937] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:24:55.329 [2024-02-13 07:25:28.859051] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:24:55.329 [2024-02-13 07:25:28.859105] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:24:55.329 pt1 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:55.329 07:25:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:55.588 07:25:29 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:55.588 "name": "raid_bdev1", 00:24:55.588 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:55.588 "strip_size_kb": 64, 00:24:55.588 "state": "configuring", 00:24:55.588 "raid_level": "raid5f", 00:24:55.588 "superblock": true, 00:24:55.588 "num_base_bdevs": 4, 00:24:55.588 "num_base_bdevs_discovered": 1, 00:24:55.588 "num_base_bdevs_operational": 4, 00:24:55.588 "base_bdevs_list": [ 00:24:55.588 { 00:24:55.588 "name": "pt1", 00:24:55.588 "uuid": "5fbdc192-47b7-5f57-beec-cba2714877a1", 00:24:55.588 "is_configured": true, 
00:24:55.588 "data_offset": 2048, 00:24:55.588 "data_size": 63488 00:24:55.588 }, 00:24:55.588 { 00:24:55.588 "name": null, 00:24:55.588 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:55.588 "is_configured": false, 00:24:55.588 "data_offset": 2048, 00:24:55.588 "data_size": 63488 00:24:55.588 }, 00:24:55.588 { 00:24:55.588 "name": null, 00:24:55.588 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:55.588 "is_configured": false, 00:24:55.588 "data_offset": 2048, 00:24:55.588 "data_size": 63488 00:24:55.588 }, 00:24:55.588 { 00:24:55.588 "name": null, 00:24:55.588 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:55.588 "is_configured": false, 00:24:55.588 "data_offset": 2048, 00:24:55.588 "data_size": 63488 00:24:55.588 } 00:24:55.588 ] 00:24:55.588 }' 00:24:55.588 07:25:29 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:55.588 07:25:29 -- common/autotest_common.sh@10 -- # set +x 00:24:56.155 07:25:29 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:24:56.155 07:25:29 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:56.155 07:25:29 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:24:56.414 07:25:29 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:56.414 07:25:29 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:56.414 07:25:29 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:24:56.672 07:25:30 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:56.672 07:25:30 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:56.672 07:25:30 -- bdev/bdev_raid.sh@485 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:24:56.672 07:25:30 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:24:56.672 07:25:30 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:24:56.672 07:25:30 -- bdev/bdev_raid.sh@489 -- # i=3 00:24:56.672 07:25:30 -- bdev/bdev_raid.sh@490 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:24:56.931 [2024-02-13 07:25:30.528827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:24:56.931 [2024-02-13 07:25:30.528919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:56.931 [2024-02-13 07:25:30.528952] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cf80 00:24:56.931 [2024-02-13 07:25:30.528978] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:56.931 [2024-02-13 07:25:30.529439] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:56.931 [2024-02-13 07:25:30.529505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:24:56.931 [2024-02-13 07:25:30.529632] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:24:56.931 [2024-02-13 07:25:30.529649] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:24:56.931 [2024-02-13 07:25:30.529657] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:24:56.931 [2024-02-13 07:25:30.529700] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000cc80 name raid_bdev1, state configuring 00:24:56.931 [2024-02-13 07:25:30.529800] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:24:56.931 pt4 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:56.931 07:25:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:57.190 07:25:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:57.190 "name": "raid_bdev1", 00:24:57.190 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:57.190 "strip_size_kb": 64, 00:24:57.190 "state": "configuring", 00:24:57.190 "raid_level": "raid5f", 00:24:57.190 "superblock": true, 00:24:57.190 "num_base_bdevs": 4, 00:24:57.190 "num_base_bdevs_discovered": 1, 00:24:57.190 "num_base_bdevs_operational": 3, 00:24:57.190 "base_bdevs_list": [ 00:24:57.190 { 00:24:57.190 "name": null, 00:24:57.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:57.190 "is_configured": false, 00:24:57.190 "data_offset": 2048, 00:24:57.190 "data_size": 63488 00:24:57.190 }, 00:24:57.190 { 00:24:57.190 "name": null, 00:24:57.190 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:57.190 "is_configured": false, 00:24:57.190 "data_offset": 2048, 00:24:57.190 "data_size": 63488 00:24:57.190 }, 00:24:57.190 { 00:24:57.190 "name": null, 00:24:57.190 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:57.190 "is_configured": false, 00:24:57.190 "data_offset": 2048, 00:24:57.190 "data_size": 63488 00:24:57.190 }, 00:24:57.190 { 00:24:57.190 "name": "pt4", 00:24:57.190 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:57.190 "is_configured": true, 00:24:57.190 "data_offset": 2048, 00:24:57.190 "data_size": 63488 00:24:57.190 } 00:24:57.190 ] 00:24:57.190 }' 00:24:57.190 07:25:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:57.190 07:25:30 -- common/autotest_common.sh@10 -- # set +x 00:24:57.759 07:25:31 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:24:57.759 07:25:31 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:57.759 07:25:31 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:24:58.017 [2024-02-13 07:25:31.561022] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:24:58.017 [2024-02-13 07:25:31.561124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.017 [2024-02-13 07:25:31.561165] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d580 00:24:58.017 [2024-02-13 07:25:31.561195] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.017 [2024-02-13 07:25:31.561640] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.017 [2024-02-13 07:25:31.561741] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:24:58.017 [2024-02-13 07:25:31.561862] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:24:58.017 [2024-02-13 07:25:31.561891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:24:58.017 pt2 00:24:58.017 07:25:31 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:58.017 07:25:31 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:58.017 07:25:31 -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:24:58.276 [2024-02-13 07:25:31.789076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:24:58.276 [2024-02-13 07:25:31.789160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:24:58.276 [2024-02-13 07:25:31.789191] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000d880 00:24:58.276 [2024-02-13 07:25:31.789217] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:24:58.276 [2024-02-13 07:25:31.789646] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:24:58.276 [2024-02-13 07:25:31.789736] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:24:58.276 [2024-02-13 07:25:31.789842] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:24:58.276 [2024-02-13 07:25:31.789871] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:24:58.276 [2024-02-13 07:25:31.789997] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000d280 00:24:58.276 [2024-02-13 07:25:31.790023] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:24:58.276 [2024-02-13 07:25:31.790129] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:24:58.276 [2024-02-13 07:25:31.795373] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000d280 00:24:58.276 [2024-02-13 07:25:31.795403] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000d280 00:24:58.276 pt3 00:24:58.276 [2024-02-13 07:25:31.795688] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@127 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:24:58.276 07:25:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:24:58.535 07:25:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:24:58.535 "name": "raid_bdev1", 00:24:58.535 "uuid": "4329e872-a637-4017-8bd3-006e1e4c5a7d", 00:24:58.535 "strip_size_kb": 64, 00:24:58.535 "state": "online", 00:24:58.535 "raid_level": "raid5f", 00:24:58.535 "superblock": true, 00:24:58.535 "num_base_bdevs": 4, 00:24:58.535 "num_base_bdevs_discovered": 3, 00:24:58.535 "num_base_bdevs_operational": 3, 00:24:58.535 "base_bdevs_list": [ 00:24:58.535 { 00:24:58.535 "name": null, 00:24:58.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:24:58.535 "is_configured": false, 00:24:58.535 "data_offset": 2048, 00:24:58.535 "data_size": 63488 00:24:58.535 }, 00:24:58.535 { 00:24:58.535 "name": "pt2", 00:24:58.535 "uuid": "e71fb2cb-7040-566a-b269-89fbeb90ac30", 00:24:58.535 "is_configured": true, 00:24:58.535 "data_offset": 2048, 00:24:58.535 "data_size": 63488 00:24:58.535 }, 00:24:58.535 { 00:24:58.535 "name": "pt3", 00:24:58.535 "uuid": "73c0f665-4dd9-5794-b2fa-506ed91bf585", 00:24:58.535 "is_configured": true, 00:24:58.535 "data_offset": 2048, 00:24:58.535 "data_size": 63488 00:24:58.535 }, 00:24:58.535 { 00:24:58.535 "name": "pt4", 00:24:58.535 "uuid": "d079d12f-d230-51d0-ad08-b62249860eac", 00:24:58.535 "is_configured": true, 00:24:58.535 "data_offset": 2048, 00:24:58.535 "data_size": 63488 00:24:58.535 } 00:24:58.535 ] 00:24:58.535 }' 00:24:58.535 07:25:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:24:58.535 07:25:31 -- common/autotest_common.sh@10 -- # set +x 00:24:59.102 07:25:32 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:24:59.102 07:25:32 -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:24:59.360 [2024-02-13 07:25:32.805616] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:24:59.360 07:25:32 -- bdev/bdev_raid.sh@506 -- # '[' 4329e872-a637-4017-8bd3-006e1e4c5a7d '!=' 4329e872-a637-4017-8bd3-006e1e4c5a7d ']' 00:24:59.360 07:25:32 -- bdev/bdev_raid.sh@511 -- # killprocess 135862 00:24:59.360 07:25:32 -- common/autotest_common.sh@924 -- # '[' -z 135862 ']' 00:24:59.360 07:25:32 -- common/autotest_common.sh@928 -- # kill -0 135862 00:24:59.360 07:25:32 -- common/autotest_common.sh@929 -- # uname 00:24:59.360 07:25:32 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:24:59.360 07:25:32 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 135862 00:24:59.360 killing process with pid 135862 00:24:59.360 07:25:32 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:24:59.360 07:25:32 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:24:59.360 07:25:32 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 135862' 00:24:59.360 07:25:32 -- common/autotest_common.sh@943 -- # kill 135862 00:24:59.360 07:25:32 -- common/autotest_common.sh@948 -- # wait 135862 00:24:59.360 [2024-02-13 07:25:32.839496] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:24:59.360 [2024-02-13 07:25:32.839559] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:24:59.360 [2024-02-13 07:25:32.839659] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:24:59.360 [2024-02-13 07:25:32.839682] bdev_raid.c: 
351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000d280 name raid_bdev1, state offline 00:24:59.619 [2024-02-13 07:25:33.095940] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:00.556 ************************************ 00:25:00.556 END TEST raid5f_superblock_test 00:25:00.556 ************************************ 00:25:00.556 07:25:34 -- bdev/bdev_raid.sh@513 -- # return 0 00:25:00.556 00:25:00.556 real 0m20.702s 00:25:00.556 user 0m38.189s 00:25:00.556 sys 0m2.311s 00:25:00.556 07:25:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:00.557 07:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@747 -- # '[' true = true ']' 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@748 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false 00:25:00.557 07:25:34 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:25:00.557 07:25:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:00.557 07:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:00.557 ************************************ 00:25:00.557 START TEST raid5f_rebuild_test 00:25:00.557 ************************************ 00:25:00.557 07:25:34 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid5f 4 false false 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@544 -- # 
raid_pid=136550 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@545 -- # waitforlisten 136550 /var/tmp/spdk-raid.sock 00:25:00.557 07:25:34 -- common/autotest_common.sh@817 -- # '[' -z 136550 ']' 00:25:00.557 07:25:34 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:00.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:00.557 07:25:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:00.557 07:25:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.557 07:25:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:00.557 07:25:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.557 07:25:34 -- common/autotest_common.sh@10 -- # set +x 00:25:00.557 [2024-02-13 07:25:34.177128] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:25:00.557 [2024-02-13 07:25:34.177267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136550 ] 00:25:00.557 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:00.557 Zero copy mechanism will not be used. 00:25:00.817 [2024-02-13 07:25:34.332702] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.079 [2024-02-13 07:25:34.559757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.079 [2024-02-13 07:25:34.737449] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:01.647 07:25:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:01.647 07:25:35 -- common/autotest_common.sh@850 -- # return 0 00:25:01.647 07:25:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:01.647 07:25:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:01.647 07:25:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:25:01.647 BaseBdev1 00:25:01.905 07:25:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:01.905 07:25:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:01.905 07:25:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:25:02.163 BaseBdev2 00:25:02.163 07:25:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:02.163 07:25:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:02.163 07:25:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:25:02.163 BaseBdev3 00:25:02.163 07:25:35 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:02.163 07:25:35 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:25:02.163 07:25:35 -- bdev/bdev_raid.sh@553 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:25:02.423 BaseBdev4 00:25:02.423 07:25:36 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:02.681 spare_malloc 00:25:02.681 
07:25:36 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:02.940 spare_delay 00:25:02.940 07:25:36 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:03.199 [2024-02-13 07:25:36.683935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:03.199 [2024-02-13 07:25:36.684034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:03.199 [2024-02-13 07:25:36.684070] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:25:03.199 [2024-02-13 07:25:36.684116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:03.199 [2024-02-13 07:25:36.686423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:03.199 [2024-02-13 07:25:36.686471] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:03.199 spare 00:25:03.199 07:25:36 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:03.199 [2024-02-13 07:25:36.891990] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:03.457 [2024-02-13 07:25:36.893919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:03.457 [2024-02-13 07:25:36.893973] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:03.458 [2024-02-13 07:25:36.894018] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:03.458 [2024-02-13 07:25:36.894103] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x616000009080 00:25:03.458 [2024-02-13 07:25:36.894114] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:03.458 [2024-02-13 07:25:36.894265] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005860 00:25:03.458 [2024-02-13 07:25:36.899761] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x616000009080 00:25:03.458 [2024-02-13 07:25:36.899787] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x616000009080 00:25:03.458 [2024-02-13 07:25:36.899966] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:25:03.458 07:25:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:03.458 07:25:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:03.458 "name": "raid_bdev1", 00:25:03.458 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:03.458 "strip_size_kb": 64, 00:25:03.458 "state": "online", 00:25:03.458 "raid_level": "raid5f", 00:25:03.458 "superblock": false, 00:25:03.458 "num_base_bdevs": 4, 00:25:03.458 "num_base_bdevs_discovered": 4, 00:25:03.458 "num_base_bdevs_operational": 4, 00:25:03.458 "base_bdevs_list": [ 00:25:03.458 { 00:25:03.458 "name": "BaseBdev1", 00:25:03.458 "uuid": "7c5b474a-d15c-4dc9-8174-2614dec4a0ff", 00:25:03.458 "is_configured": true, 00:25:03.458 "data_offset": 0, 00:25:03.458 "data_size": 65536 00:25:03.458 }, 00:25:03.458 { 00:25:03.458 "name": "BaseBdev2", 00:25:03.458 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:03.458 "is_configured": true, 00:25:03.458 "data_offset": 0, 00:25:03.458 "data_size": 65536 00:25:03.458 }, 00:25:03.458 { 00:25:03.458 "name": "BaseBdev3", 00:25:03.458 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:03.458 "is_configured": true, 00:25:03.458 "data_offset": 0, 00:25:03.458 "data_size": 65536 00:25:03.458 }, 00:25:03.458 { 00:25:03.458 "name": "BaseBdev4", 00:25:03.458 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:03.458 "is_configured": true, 00:25:03.458 "data_offset": 0, 00:25:03.458 "data_size": 65536 00:25:03.458 } 00:25:03.458 ] 00:25:03.458 }' 00:25:03.458 07:25:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:03.458 07:25:37 -- common/autotest_common.sh@10 -- # set +x 00:25:04.394 07:25:37 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:04.394 07:25:37 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:04.394 [2024-02-13 07:25:37.938468] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:04.394 07:25:37 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=196608 00:25:04.394 07:25:37 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:04.394 07:25:37 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:04.653 07:25:38 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:25:04.653 07:25:38 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:04.653 07:25:38 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:04.653 07:25:38 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:04.653 07:25:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:04.653 07:25:38 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:04.653 07:25:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:04.653 07:25:38 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:04.653 07:25:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:04.653 07:25:38 -- bdev/nbd_common.sh@12 -- # local i 00:25:04.653 07:25:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:04.653 07:25:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:04.653 07:25:38 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:04.913 [2024-02-13 07:25:38.370424] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005a00 00:25:04.913 /dev/nbd0 00:25:04.913 07:25:38 -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:25:04.913 07:25:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:04.913 07:25:38 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:04.913 07:25:38 -- common/autotest_common.sh@855 -- # local i 00:25:04.913 07:25:38 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:04.913 07:25:38 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:04.913 07:25:38 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:04.913 07:25:38 -- common/autotest_common.sh@859 -- # break 00:25:04.913 07:25:38 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:04.913 07:25:38 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:04.913 07:25:38 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:04.913 1+0 records in 00:25:04.913 1+0 records out 00:25:04.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377481 s, 10.9 MB/s 00:25:04.913 07:25:38 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:04.913 07:25:38 -- common/autotest_common.sh@872 -- # size=4096 00:25:04.913 07:25:38 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:04.913 07:25:38 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:04.913 07:25:38 -- common/autotest_common.sh@875 -- # return 0 00:25:04.913 07:25:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:04.913 07:25:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:04.913 07:25:38 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:04.913 07:25:38 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:04.913 07:25:38 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:04.913 07:25:38 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:25:05.482 512+0 records in 00:25:05.482 512+0 records out 00:25:05.482 100663296 bytes (101 MB, 96 MiB) copied, 0.437535 s, 230 MB/s 00:25:05.482 07:25:38 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:05.482 07:25:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:05.482 07:25:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:05.482 07:25:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:05.482 07:25:38 -- bdev/nbd_common.sh@51 -- # local i 00:25:05.482 07:25:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:05.482 07:25:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:05.482 07:25:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:05.482 07:25:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:05.482 07:25:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:05.482 07:25:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:05.482 07:25:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:05.482 07:25:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:05.482 07:25:39 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:05.482 [2024-02-13 07:25:39.132868] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:05.740 07:25:39 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:05.740 07:25:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:05.740 07:25:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:05.740 07:25:39 -- bdev/nbd_common.sh@41 -- # break 00:25:05.740 07:25:39 -- bdev/nbd_common.sh@45 -- # return 0 
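The 196608-byte dd block size above is not arbitrary: with raid5f on 4 base bdevs, one strip per stripe holds parity, so a full stripe carries 3 data strips of 64 KiB, i.e. 192 KiB = 196608 bytes = 384 blocks of 512 bytes. That lines up with the traced write_unit_size=384 and the echoed 192 (presumably KiB), and keeps every dd write a full-stripe write. A small sketch of the arithmetic under those assumptions (the variable names are ours):

    # Full-stripe write size for raid5f: one parity strip per stripe,
    # so data strips = num_base_bdevs - 1 (values taken from this run).
    num_base_bdevs=4
    strip_size_kb=64
    blocklen=512

    data_strips=$((num_base_bdevs - 1))                                  # 3
    write_unit_blocks=$((strip_size_kb * 1024 * data_strips / blocklen)) # 384
    bs=$((write_unit_blocks * blocklen))                                 # 196608

    echo "write unit: $write_unit_blocks blocks ($bs bytes)"
    # 512 such writes move 512 * 196608 = 100663296 bytes (~96 MiB),
    # matching the dd transcript above.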
00:25:05.740 07:25:39 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:05.740 [2024-02-13 07:25:39.408280] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:05.740 07:25:39 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:05.740 07:25:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:05.740 07:25:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:05.740 07:25:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:05.740 07:25:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:05.741 07:25:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:05.741 07:25:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:05.741 07:25:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:05.741 07:25:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:05.741 07:25:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:05.741 07:25:39 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:05.741 07:25:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:05.999 07:25:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:05.999 "name": "raid_bdev1", 00:25:05.999 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:05.999 "strip_size_kb": 64, 00:25:05.999 "state": "online", 00:25:05.999 "raid_level": "raid5f", 00:25:05.999 "superblock": false, 00:25:05.999 "num_base_bdevs": 4, 00:25:05.999 "num_base_bdevs_discovered": 3, 00:25:05.999 "num_base_bdevs_operational": 3, 00:25:05.999 "base_bdevs_list": [ 00:25:05.999 { 00:25:05.999 "name": null, 00:25:05.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:06.000 "is_configured": false, 00:25:06.000 "data_offset": 0, 00:25:06.000 "data_size": 65536 00:25:06.000 }, 00:25:06.000 { 00:25:06.000 "name": "BaseBdev2", 00:25:06.000 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:06.000 "is_configured": true, 00:25:06.000 "data_offset": 0, 00:25:06.000 "data_size": 65536 00:25:06.000 }, 00:25:06.000 { 00:25:06.000 "name": "BaseBdev3", 00:25:06.000 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:06.000 "is_configured": true, 00:25:06.000 "data_offset": 0, 00:25:06.000 "data_size": 65536 00:25:06.000 }, 00:25:06.000 { 00:25:06.000 "name": "BaseBdev4", 00:25:06.000 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:06.000 "is_configured": true, 00:25:06.000 "data_offset": 0, 00:25:06.000 "data_size": 65536 00:25:06.000 } 00:25:06.000 ] 00:25:06.000 }' 00:25:06.000 07:25:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:06.000 07:25:39 -- common/autotest_common.sh@10 -- # set +x 00:25:06.936 07:25:40 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:06.936 [2024-02-13 07:25:40.452439] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:06.936 [2024-02-13 07:25:40.452485] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:06.936 [2024-02-13 07:25:40.462492] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d220 00:25:06.936 [2024-02-13 07:25:40.469007] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:06.936 07:25:40 -- bdev/bdev_raid.sh@598 -- # sleep 
1 00:25:07.874 07:25:41 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:07.874 07:25:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:07.874 07:25:41 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:07.874 07:25:41 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:07.874 07:25:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:07.874 07:25:41 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:07.874 07:25:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.133 07:25:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:08.133 "name": "raid_bdev1", 00:25:08.133 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:08.133 "strip_size_kb": 64, 00:25:08.133 "state": "online", 00:25:08.133 "raid_level": "raid5f", 00:25:08.133 "superblock": false, 00:25:08.133 "num_base_bdevs": 4, 00:25:08.133 "num_base_bdevs_discovered": 4, 00:25:08.133 "num_base_bdevs_operational": 4, 00:25:08.133 "process": { 00:25:08.133 "type": "rebuild", 00:25:08.133 "target": "spare", 00:25:08.133 "progress": { 00:25:08.133 "blocks": 23040, 00:25:08.133 "percent": 11 00:25:08.133 } 00:25:08.133 }, 00:25:08.133 "base_bdevs_list": [ 00:25:08.133 { 00:25:08.133 "name": "spare", 00:25:08.133 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:08.133 "is_configured": true, 00:25:08.133 "data_offset": 0, 00:25:08.133 "data_size": 65536 00:25:08.133 }, 00:25:08.133 { 00:25:08.133 "name": "BaseBdev2", 00:25:08.133 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:08.133 "is_configured": true, 00:25:08.133 "data_offset": 0, 00:25:08.133 "data_size": 65536 00:25:08.133 }, 00:25:08.133 { 00:25:08.133 "name": "BaseBdev3", 00:25:08.133 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:08.133 "is_configured": true, 00:25:08.133 "data_offset": 0, 00:25:08.133 "data_size": 65536 00:25:08.133 }, 00:25:08.133 { 00:25:08.133 "name": "BaseBdev4", 00:25:08.133 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:08.133 "is_configured": true, 00:25:08.133 "data_offset": 0, 00:25:08.133 "data_size": 65536 00:25:08.133 } 00:25:08.133 ] 00:25:08.133 }' 00:25:08.133 07:25:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:08.133 07:25:41 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:08.133 07:25:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:08.133 07:25:41 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:08.133 07:25:41 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:08.393 [2024-02-13 07:25:42.030104] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:08.393 [2024-02-13 07:25:42.079246] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:08.393 [2024-02-13 07:25:42.079360] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:08.652 07:25:42 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:08.652 07:25:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:08.912 07:25:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:08.912 "name": "raid_bdev1", 00:25:08.912 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:08.912 "strip_size_kb": 64, 00:25:08.912 "state": "online", 00:25:08.912 "raid_level": "raid5f", 00:25:08.912 "superblock": false, 00:25:08.912 "num_base_bdevs": 4, 00:25:08.912 "num_base_bdevs_discovered": 3, 00:25:08.912 "num_base_bdevs_operational": 3, 00:25:08.912 "base_bdevs_list": [ 00:25:08.912 { 00:25:08.912 "name": null, 00:25:08.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:08.912 "is_configured": false, 00:25:08.912 "data_offset": 0, 00:25:08.912 "data_size": 65536 00:25:08.912 }, 00:25:08.912 { 00:25:08.912 "name": "BaseBdev2", 00:25:08.912 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:08.912 "is_configured": true, 00:25:08.912 "data_offset": 0, 00:25:08.913 "data_size": 65536 00:25:08.913 }, 00:25:08.913 { 00:25:08.913 "name": "BaseBdev3", 00:25:08.913 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:08.913 "is_configured": true, 00:25:08.913 "data_offset": 0, 00:25:08.913 "data_size": 65536 00:25:08.913 }, 00:25:08.913 { 00:25:08.913 "name": "BaseBdev4", 00:25:08.913 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:08.913 "is_configured": true, 00:25:08.913 "data_offset": 0, 00:25:08.913 "data_size": 65536 00:25:08.913 } 00:25:08.913 ] 00:25:08.913 }' 00:25:08.913 07:25:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:08.913 07:25:42 -- common/autotest_common.sh@10 -- # set +x 00:25:09.482 07:25:42 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:09.482 07:25:42 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:09.482 07:25:42 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:09.482 07:25:42 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:09.482 07:25:42 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:09.482 07:25:42 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:09.482 07:25:42 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:09.482 07:25:43 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:09.482 "name": "raid_bdev1", 00:25:09.482 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:09.482 "strip_size_kb": 64, 00:25:09.482 "state": "online", 00:25:09.482 "raid_level": "raid5f", 00:25:09.482 "superblock": false, 00:25:09.482 "num_base_bdevs": 4, 00:25:09.482 "num_base_bdevs_discovered": 3, 00:25:09.482 "num_base_bdevs_operational": 3, 00:25:09.482 "base_bdevs_list": [ 00:25:09.482 { 00:25:09.482 "name": null, 00:25:09.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:09.482 "is_configured": false, 00:25:09.482 "data_offset": 0, 00:25:09.482 "data_size": 65536 00:25:09.482 }, 00:25:09.482 { 00:25:09.482 "name": "BaseBdev2", 00:25:09.482 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 
00:25:09.482 "is_configured": true, 00:25:09.482 "data_offset": 0, 00:25:09.482 "data_size": 65536 00:25:09.482 }, 00:25:09.482 { 00:25:09.482 "name": "BaseBdev3", 00:25:09.482 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:09.482 "is_configured": true, 00:25:09.482 "data_offset": 0, 00:25:09.482 "data_size": 65536 00:25:09.482 }, 00:25:09.482 { 00:25:09.482 "name": "BaseBdev4", 00:25:09.482 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:09.482 "is_configured": true, 00:25:09.482 "data_offset": 0, 00:25:09.482 "data_size": 65536 00:25:09.482 } 00:25:09.482 ] 00:25:09.482 }' 00:25:09.482 07:25:43 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:09.741 07:25:43 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:09.741 07:25:43 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:09.741 07:25:43 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:09.741 07:25:43 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:10.001 [2024-02-13 07:25:43.470370] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:10.001 [2024-02-13 07:25:43.470422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:10.001 [2024-02-13 07:25:43.481564] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002d3c0 00:25:10.001 [2024-02-13 07:25:43.489629] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:10.001 07:25:43 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:10.938 07:25:44 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:10.938 07:25:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:10.938 07:25:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:10.938 07:25:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:10.938 07:25:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:10.938 07:25:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:10.938 07:25:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:11.198 "name": "raid_bdev1", 00:25:11.198 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:11.198 "strip_size_kb": 64, 00:25:11.198 "state": "online", 00:25:11.198 "raid_level": "raid5f", 00:25:11.198 "superblock": false, 00:25:11.198 "num_base_bdevs": 4, 00:25:11.198 "num_base_bdevs_discovered": 4, 00:25:11.198 "num_base_bdevs_operational": 4, 00:25:11.198 "process": { 00:25:11.198 "type": "rebuild", 00:25:11.198 "target": "spare", 00:25:11.198 "progress": { 00:25:11.198 "blocks": 23040, 00:25:11.198 "percent": 11 00:25:11.198 } 00:25:11.198 }, 00:25:11.198 "base_bdevs_list": [ 00:25:11.198 { 00:25:11.198 "name": "spare", 00:25:11.198 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:11.198 "is_configured": true, 00:25:11.198 "data_offset": 0, 00:25:11.198 "data_size": 65536 00:25:11.198 }, 00:25:11.198 { 00:25:11.198 "name": "BaseBdev2", 00:25:11.198 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:11.198 "is_configured": true, 00:25:11.198 "data_offset": 0, 00:25:11.198 "data_size": 65536 00:25:11.198 }, 00:25:11.198 { 00:25:11.198 "name": "BaseBdev3", 00:25:11.198 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:11.198 "is_configured": 
true, 00:25:11.198 "data_offset": 0, 00:25:11.198 "data_size": 65536 00:25:11.198 }, 00:25:11.198 { 00:25:11.198 "name": "BaseBdev4", 00:25:11.198 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:11.198 "is_configured": true, 00:25:11.198 "data_offset": 0, 00:25:11.198 "data_size": 65536 00:25:11.198 } 00:25:11.198 ] 00:25:11.198 }' 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@657 -- # local timeout=725 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:11.198 07:25:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:11.457 07:25:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:11.457 "name": "raid_bdev1", 00:25:11.457 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:11.457 "strip_size_kb": 64, 00:25:11.457 "state": "online", 00:25:11.457 "raid_level": "raid5f", 00:25:11.457 "superblock": false, 00:25:11.457 "num_base_bdevs": 4, 00:25:11.457 "num_base_bdevs_discovered": 4, 00:25:11.457 "num_base_bdevs_operational": 4, 00:25:11.457 "process": { 00:25:11.458 "type": "rebuild", 00:25:11.458 "target": "spare", 00:25:11.458 "progress": { 00:25:11.458 "blocks": 28800, 00:25:11.458 "percent": 14 00:25:11.458 } 00:25:11.458 }, 00:25:11.458 "base_bdevs_list": [ 00:25:11.458 { 00:25:11.458 "name": "spare", 00:25:11.458 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:11.458 "is_configured": true, 00:25:11.458 "data_offset": 0, 00:25:11.458 "data_size": 65536 00:25:11.458 }, 00:25:11.458 { 00:25:11.458 "name": "BaseBdev2", 00:25:11.458 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:11.458 "is_configured": true, 00:25:11.458 "data_offset": 0, 00:25:11.458 "data_size": 65536 00:25:11.458 }, 00:25:11.458 { 00:25:11.458 "name": "BaseBdev3", 00:25:11.458 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:11.458 "is_configured": true, 00:25:11.458 "data_offset": 0, 00:25:11.458 "data_size": 65536 00:25:11.458 }, 00:25:11.458 { 00:25:11.458 "name": "BaseBdev4", 00:25:11.458 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:11.458 "is_configured": true, 00:25:11.458 "data_offset": 0, 00:25:11.458 "data_size": 65536 00:25:11.458 } 00:25:11.458 ] 00:25:11.458 }' 00:25:11.458 07:25:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:11.458 07:25:45 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:11.458 07:25:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 
00:25:11.458 07:25:45 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:11.458 07:25:45 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:12.837 "name": "raid_bdev1", 00:25:12.837 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:12.837 "strip_size_kb": 64, 00:25:12.837 "state": "online", 00:25:12.837 "raid_level": "raid5f", 00:25:12.837 "superblock": false, 00:25:12.837 "num_base_bdevs": 4, 00:25:12.837 "num_base_bdevs_discovered": 4, 00:25:12.837 "num_base_bdevs_operational": 4, 00:25:12.837 "process": { 00:25:12.837 "type": "rebuild", 00:25:12.837 "target": "spare", 00:25:12.837 "progress": { 00:25:12.837 "blocks": 53760, 00:25:12.837 "percent": 27 00:25:12.837 } 00:25:12.837 }, 00:25:12.837 "base_bdevs_list": [ 00:25:12.837 { 00:25:12.837 "name": "spare", 00:25:12.837 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:12.837 "is_configured": true, 00:25:12.837 "data_offset": 0, 00:25:12.837 "data_size": 65536 00:25:12.837 }, 00:25:12.837 { 00:25:12.837 "name": "BaseBdev2", 00:25:12.837 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:12.837 "is_configured": true, 00:25:12.837 "data_offset": 0, 00:25:12.837 "data_size": 65536 00:25:12.837 }, 00:25:12.837 { 00:25:12.837 "name": "BaseBdev3", 00:25:12.837 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:12.837 "is_configured": true, 00:25:12.837 "data_offset": 0, 00:25:12.837 "data_size": 65536 00:25:12.837 }, 00:25:12.837 { 00:25:12.837 "name": "BaseBdev4", 00:25:12.837 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:12.837 "is_configured": true, 00:25:12.837 "data_offset": 0, 00:25:12.837 "data_size": 65536 00:25:12.837 } 00:25:12.837 ] 00:25:12.837 }' 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:12.837 07:25:46 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:13.805 07:25:47 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:13.805 07:25:47 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:13.805 07:25:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:13.805 07:25:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:13.805 07:25:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:13.805 07:25:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:13.805 07:25:47 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:13.805 07:25:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:25:14.064 07:25:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:14.065 "name": "raid_bdev1", 00:25:14.065 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:14.065 "strip_size_kb": 64, 00:25:14.065 "state": "online", 00:25:14.065 "raid_level": "raid5f", 00:25:14.065 "superblock": false, 00:25:14.065 "num_base_bdevs": 4, 00:25:14.065 "num_base_bdevs_discovered": 4, 00:25:14.065 "num_base_bdevs_operational": 4, 00:25:14.065 "process": { 00:25:14.065 "type": "rebuild", 00:25:14.065 "target": "spare", 00:25:14.065 "progress": { 00:25:14.065 "blocks": 78720, 00:25:14.065 "percent": 40 00:25:14.065 } 00:25:14.065 }, 00:25:14.065 "base_bdevs_list": [ 00:25:14.065 { 00:25:14.065 "name": "spare", 00:25:14.065 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:14.065 "is_configured": true, 00:25:14.065 "data_offset": 0, 00:25:14.065 "data_size": 65536 00:25:14.065 }, 00:25:14.065 { 00:25:14.065 "name": "BaseBdev2", 00:25:14.065 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:14.065 "is_configured": true, 00:25:14.065 "data_offset": 0, 00:25:14.065 "data_size": 65536 00:25:14.065 }, 00:25:14.065 { 00:25:14.065 "name": "BaseBdev3", 00:25:14.065 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:14.065 "is_configured": true, 00:25:14.065 "data_offset": 0, 00:25:14.065 "data_size": 65536 00:25:14.065 }, 00:25:14.065 { 00:25:14.065 "name": "BaseBdev4", 00:25:14.065 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:14.065 "is_configured": true, 00:25:14.065 "data_offset": 0, 00:25:14.065 "data_size": 65536 00:25:14.065 } 00:25:14.065 ] 00:25:14.065 }' 00:25:14.065 07:25:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:14.065 07:25:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:14.065 07:25:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:14.324 07:25:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:14.324 07:25:47 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:15.261 07:25:48 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:15.261 07:25:48 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:15.261 07:25:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:15.261 07:25:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:15.261 07:25:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:15.261 07:25:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:15.261 07:25:48 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:15.261 07:25:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:15.521 07:25:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:15.521 "name": "raid_bdev1", 00:25:15.521 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:15.521 "strip_size_kb": 64, 00:25:15.521 "state": "online", 00:25:15.521 "raid_level": "raid5f", 00:25:15.521 "superblock": false, 00:25:15.521 "num_base_bdevs": 4, 00:25:15.521 "num_base_bdevs_discovered": 4, 00:25:15.521 "num_base_bdevs_operational": 4, 00:25:15.521 "process": { 00:25:15.521 "type": "rebuild", 00:25:15.521 "target": "spare", 00:25:15.521 "progress": { 00:25:15.521 "blocks": 103680, 00:25:15.521 "percent": 52 00:25:15.521 } 00:25:15.521 }, 00:25:15.521 "base_bdevs_list": [ 00:25:15.521 { 00:25:15.521 "name": "spare", 00:25:15.521 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:15.521 "is_configured": true, 
00:25:15.521 "data_offset": 0, 00:25:15.521 "data_size": 65536 00:25:15.521 }, 00:25:15.521 { 00:25:15.521 "name": "BaseBdev2", 00:25:15.521 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:15.521 "is_configured": true, 00:25:15.521 "data_offset": 0, 00:25:15.521 "data_size": 65536 00:25:15.521 }, 00:25:15.521 { 00:25:15.521 "name": "BaseBdev3", 00:25:15.521 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:15.521 "is_configured": true, 00:25:15.521 "data_offset": 0, 00:25:15.521 "data_size": 65536 00:25:15.521 }, 00:25:15.521 { 00:25:15.521 "name": "BaseBdev4", 00:25:15.521 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:15.521 "is_configured": true, 00:25:15.521 "data_offset": 0, 00:25:15.521 "data_size": 65536 00:25:15.521 } 00:25:15.521 ] 00:25:15.521 }' 00:25:15.521 07:25:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:15.521 07:25:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:15.521 07:25:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:15.521 07:25:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:15.521 07:25:49 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:16.458 07:25:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:16.458 07:25:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:16.458 07:25:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:16.458 07:25:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:16.458 07:25:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:16.458 07:25:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:16.458 07:25:50 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:16.458 07:25:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:16.717 07:25:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:16.717 "name": "raid_bdev1", 00:25:16.717 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:16.717 "strip_size_kb": 64, 00:25:16.717 "state": "online", 00:25:16.717 "raid_level": "raid5f", 00:25:16.717 "superblock": false, 00:25:16.717 "num_base_bdevs": 4, 00:25:16.717 "num_base_bdevs_discovered": 4, 00:25:16.717 "num_base_bdevs_operational": 4, 00:25:16.717 "process": { 00:25:16.717 "type": "rebuild", 00:25:16.717 "target": "spare", 00:25:16.717 "progress": { 00:25:16.717 "blocks": 130560, 00:25:16.717 "percent": 66 00:25:16.717 } 00:25:16.717 }, 00:25:16.717 "base_bdevs_list": [ 00:25:16.717 { 00:25:16.717 "name": "spare", 00:25:16.717 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:16.717 "is_configured": true, 00:25:16.717 "data_offset": 0, 00:25:16.717 "data_size": 65536 00:25:16.717 }, 00:25:16.717 { 00:25:16.717 "name": "BaseBdev2", 00:25:16.717 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:16.717 "is_configured": true, 00:25:16.717 "data_offset": 0, 00:25:16.717 "data_size": 65536 00:25:16.717 }, 00:25:16.717 { 00:25:16.717 "name": "BaseBdev3", 00:25:16.717 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:16.717 "is_configured": true, 00:25:16.717 "data_offset": 0, 00:25:16.717 "data_size": 65536 00:25:16.717 }, 00:25:16.717 { 00:25:16.717 "name": "BaseBdev4", 00:25:16.718 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:16.718 "is_configured": true, 00:25:16.718 "data_offset": 0, 00:25:16.718 "data_size": 65536 00:25:16.718 } 00:25:16.718 ] 00:25:16.718 }' 00:25:16.718 07:25:50 -- bdev/bdev_raid.sh@190 -- # jq 
-r '.process.type // "none"' 00:25:16.977 07:25:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:16.977 07:25:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:16.977 07:25:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:16.977 07:25:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:17.913 07:25:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:17.913 07:25:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:17.913 07:25:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:17.913 07:25:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:17.913 07:25:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:17.913 07:25:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:17.913 07:25:51 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:17.913 07:25:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:18.172 07:25:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:18.172 "name": "raid_bdev1", 00:25:18.172 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:18.172 "strip_size_kb": 64, 00:25:18.172 "state": "online", 00:25:18.172 "raid_level": "raid5f", 00:25:18.172 "superblock": false, 00:25:18.172 "num_base_bdevs": 4, 00:25:18.172 "num_base_bdevs_discovered": 4, 00:25:18.172 "num_base_bdevs_operational": 4, 00:25:18.172 "process": { 00:25:18.172 "type": "rebuild", 00:25:18.172 "target": "spare", 00:25:18.172 "progress": { 00:25:18.172 "blocks": 155520, 00:25:18.172 "percent": 79 00:25:18.172 } 00:25:18.172 }, 00:25:18.172 "base_bdevs_list": [ 00:25:18.172 { 00:25:18.172 "name": "spare", 00:25:18.172 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:18.172 "is_configured": true, 00:25:18.172 "data_offset": 0, 00:25:18.172 "data_size": 65536 00:25:18.172 }, 00:25:18.172 { 00:25:18.172 "name": "BaseBdev2", 00:25:18.172 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:18.172 "is_configured": true, 00:25:18.172 "data_offset": 0, 00:25:18.172 "data_size": 65536 00:25:18.172 }, 00:25:18.172 { 00:25:18.172 "name": "BaseBdev3", 00:25:18.172 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:18.172 "is_configured": true, 00:25:18.172 "data_offset": 0, 00:25:18.172 "data_size": 65536 00:25:18.172 }, 00:25:18.172 { 00:25:18.172 "name": "BaseBdev4", 00:25:18.172 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:18.172 "is_configured": true, 00:25:18.172 "data_offset": 0, 00:25:18.172 "data_size": 65536 00:25:18.172 } 00:25:18.172 ] 00:25:18.172 }' 00:25:18.172 07:25:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:18.172 07:25:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:18.172 07:25:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:18.172 07:25:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:18.172 07:25:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:19.549 07:25:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:19.549 07:25:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:19.549 07:25:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:19.549 07:25:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:19.549 07:25:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:19.549 07:25:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:19.549 
07:25:52 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:19.549 07:25:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:19.549 07:25:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:19.549 "name": "raid_bdev1", 00:25:19.549 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:19.549 "strip_size_kb": 64, 00:25:19.549 "state": "online", 00:25:19.549 "raid_level": "raid5f", 00:25:19.549 "superblock": false, 00:25:19.549 "num_base_bdevs": 4, 00:25:19.549 "num_base_bdevs_discovered": 4, 00:25:19.549 "num_base_bdevs_operational": 4, 00:25:19.549 "process": { 00:25:19.549 "type": "rebuild", 00:25:19.549 "target": "spare", 00:25:19.549 "progress": { 00:25:19.549 "blocks": 182400, 00:25:19.549 "percent": 92 00:25:19.549 } 00:25:19.549 }, 00:25:19.549 "base_bdevs_list": [ 00:25:19.549 { 00:25:19.549 "name": "spare", 00:25:19.549 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:19.549 "is_configured": true, 00:25:19.549 "data_offset": 0, 00:25:19.549 "data_size": 65536 00:25:19.549 }, 00:25:19.549 { 00:25:19.549 "name": "BaseBdev2", 00:25:19.549 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:19.549 "is_configured": true, 00:25:19.549 "data_offset": 0, 00:25:19.549 "data_size": 65536 00:25:19.549 }, 00:25:19.549 { 00:25:19.549 "name": "BaseBdev3", 00:25:19.549 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:19.549 "is_configured": true, 00:25:19.549 "data_offset": 0, 00:25:19.549 "data_size": 65536 00:25:19.549 }, 00:25:19.549 { 00:25:19.549 "name": "BaseBdev4", 00:25:19.549 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:19.549 "is_configured": true, 00:25:19.549 "data_offset": 0, 00:25:19.549 "data_size": 65536 00:25:19.549 } 00:25:19.549 ] 00:25:19.549 }' 00:25:19.549 07:25:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:19.550 07:25:53 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:19.550 07:25:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:19.550 07:25:53 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:19.550 07:25:53 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:20.486 [2024-02-13 07:25:53.853516] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:20.486 [2024-02-13 07:25:53.853589] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:20.486 [2024-02-13 07:25:53.853671] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:20.745 "name": "raid_bdev1", 00:25:20.745 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:20.745 "strip_size_kb": 64, 00:25:20.745 "state": 
"online", 00:25:20.745 "raid_level": "raid5f", 00:25:20.745 "superblock": false, 00:25:20.745 "num_base_bdevs": 4, 00:25:20.745 "num_base_bdevs_discovered": 4, 00:25:20.745 "num_base_bdevs_operational": 4, 00:25:20.745 "base_bdevs_list": [ 00:25:20.745 { 00:25:20.745 "name": "spare", 00:25:20.745 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:20.745 "is_configured": true, 00:25:20.745 "data_offset": 0, 00:25:20.745 "data_size": 65536 00:25:20.745 }, 00:25:20.745 { 00:25:20.745 "name": "BaseBdev2", 00:25:20.745 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:20.745 "is_configured": true, 00:25:20.745 "data_offset": 0, 00:25:20.745 "data_size": 65536 00:25:20.745 }, 00:25:20.745 { 00:25:20.745 "name": "BaseBdev3", 00:25:20.745 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:20.745 "is_configured": true, 00:25:20.745 "data_offset": 0, 00:25:20.745 "data_size": 65536 00:25:20.745 }, 00:25:20.745 { 00:25:20.745 "name": "BaseBdev4", 00:25:20.745 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:20.745 "is_configured": true, 00:25:20.745 "data_offset": 0, 00:25:20.745 "data_size": 65536 00:25:20.745 } 00:25:20.745 ] 00:25:20.745 }' 00:25:20.745 07:25:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@660 -- # break 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.004 07:25:54 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:21.263 "name": "raid_bdev1", 00:25:21.263 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:21.263 "strip_size_kb": 64, 00:25:21.263 "state": "online", 00:25:21.263 "raid_level": "raid5f", 00:25:21.263 "superblock": false, 00:25:21.263 "num_base_bdevs": 4, 00:25:21.263 "num_base_bdevs_discovered": 4, 00:25:21.263 "num_base_bdevs_operational": 4, 00:25:21.263 "base_bdevs_list": [ 00:25:21.263 { 00:25:21.263 "name": "spare", 00:25:21.263 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:21.263 "is_configured": true, 00:25:21.263 "data_offset": 0, 00:25:21.263 "data_size": 65536 00:25:21.263 }, 00:25:21.263 { 00:25:21.263 "name": "BaseBdev2", 00:25:21.263 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:21.263 "is_configured": true, 00:25:21.263 "data_offset": 0, 00:25:21.263 "data_size": 65536 00:25:21.263 }, 00:25:21.263 { 00:25:21.263 "name": "BaseBdev3", 00:25:21.263 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:21.263 "is_configured": true, 00:25:21.263 "data_offset": 0, 00:25:21.263 "data_size": 65536 00:25:21.263 }, 00:25:21.263 { 00:25:21.263 "name": "BaseBdev4", 00:25:21.263 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:21.263 "is_configured": true, 00:25:21.263 "data_offset": 0, 00:25:21.263 "data_size": 65536 00:25:21.263 } 
00:25:21.263 ] 00:25:21.263 }' 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:21.263 07:25:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:21.522 07:25:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:21.522 "name": "raid_bdev1", 00:25:21.522 "uuid": "9b63a3d3-aad4-4320-8813-78de9ea8e52a", 00:25:21.522 "strip_size_kb": 64, 00:25:21.522 "state": "online", 00:25:21.522 "raid_level": "raid5f", 00:25:21.522 "superblock": false, 00:25:21.522 "num_base_bdevs": 4, 00:25:21.522 "num_base_bdevs_discovered": 4, 00:25:21.523 "num_base_bdevs_operational": 4, 00:25:21.523 "base_bdevs_list": [ 00:25:21.523 { 00:25:21.523 "name": "spare", 00:25:21.523 "uuid": "15e6199b-0be5-558e-a47d-d99a40d56fda", 00:25:21.523 "is_configured": true, 00:25:21.523 "data_offset": 0, 00:25:21.523 "data_size": 65536 00:25:21.523 }, 00:25:21.523 { 00:25:21.523 "name": "BaseBdev2", 00:25:21.523 "uuid": "5259241f-ec03-4be8-8f43-b854b4b7c915", 00:25:21.523 "is_configured": true, 00:25:21.523 "data_offset": 0, 00:25:21.523 "data_size": 65536 00:25:21.523 }, 00:25:21.523 { 00:25:21.523 "name": "BaseBdev3", 00:25:21.523 "uuid": "ae50b943-02a5-426d-8f8e-de21242bf3ae", 00:25:21.523 "is_configured": true, 00:25:21.523 "data_offset": 0, 00:25:21.523 "data_size": 65536 00:25:21.523 }, 00:25:21.523 { 00:25:21.523 "name": "BaseBdev4", 00:25:21.523 "uuid": "21fb3133-e4c7-44dc-a3e6-ac4f90b28a3a", 00:25:21.523 "is_configured": true, 00:25:21.523 "data_offset": 0, 00:25:21.523 "data_size": 65536 00:25:21.523 } 00:25:21.523 ] 00:25:21.523 }' 00:25:21.523 07:25:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:21.523 07:25:55 -- common/autotest_common.sh@10 -- # set +x 00:25:22.090 07:25:55 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:22.349 [2024-02-13 07:25:55.828085] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:22.349 [2024-02-13 07:25:55.828117] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:22.349 [2024-02-13 07:25:55.828205] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:22.349 [2024-02-13 07:25:55.828288] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:25:22.349 [2024-02-13 07:25:55.828300] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x616000009080 name raid_bdev1, state offline 00:25:22.349 07:25:55 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:22.349 07:25:55 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:22.349 07:25:56 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:22.349 07:25:56 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:22.349 07:25:56 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:22.349 07:25:56 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:22.349 07:25:56 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:22.349 07:25:56 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:22.349 07:25:56 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:22.349 07:25:56 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:22.349 07:25:56 -- bdev/nbd_common.sh@12 -- # local i 00:25:22.349 07:25:56 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:22.349 07:25:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:22.349 07:25:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:22.607 /dev/nbd0 00:25:22.607 07:25:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:22.607 07:25:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:22.607 07:25:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:22.607 07:25:56 -- common/autotest_common.sh@855 -- # local i 00:25:22.607 07:25:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:22.607 07:25:56 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:22.607 07:25:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:22.865 07:25:56 -- common/autotest_common.sh@859 -- # break 00:25:22.865 07:25:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:22.865 07:25:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:22.865 07:25:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:22.865 1+0 records in 00:25:22.865 1+0 records out 00:25:22.865 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479413 s, 8.5 MB/s 00:25:22.865 07:25:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:22.865 07:25:56 -- common/autotest_common.sh@872 -- # size=4096 00:25:22.865 07:25:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:22.865 07:25:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:22.865 07:25:56 -- common/autotest_common.sh@875 -- # return 0 00:25:22.865 07:25:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:22.865 07:25:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:22.865 07:25:56 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:22.865 /dev/nbd1 00:25:22.865 07:25:56 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:22.865 07:25:56 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:22.865 07:25:56 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:22.865 07:25:56 -- common/autotest_common.sh@855 -- # local i 00:25:22.866 07:25:56 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:22.866 07:25:56 -- 
common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:22.866 07:25:56 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:22.866 07:25:56 -- common/autotest_common.sh@859 -- # break 00:25:22.866 07:25:56 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:22.866 07:25:56 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:22.866 07:25:56 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:22.866 1+0 records in 00:25:22.866 1+0 records out 00:25:22.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360298 s, 11.4 MB/s 00:25:23.123 07:25:56 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:23.123 07:25:56 -- common/autotest_common.sh@872 -- # size=4096 00:25:23.123 07:25:56 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:23.123 07:25:56 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:23.123 07:25:56 -- common/autotest_common.sh@875 -- # return 0 00:25:23.123 07:25:56 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:23.123 07:25:56 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:23.123 07:25:56 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:25:23.123 07:25:56 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:23.123 07:25:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:23.123 07:25:56 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:23.124 07:25:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:23.124 07:25:56 -- bdev/nbd_common.sh@51 -- # local i 00:25:23.124 07:25:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:23.124 07:25:56 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:23.381 07:25:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:23.381 07:25:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:23.381 07:25:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:23.381 07:25:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:23.381 07:25:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:23.381 07:25:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:23.381 07:25:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:23.639 07:25:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:23.639 07:25:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:23.639 07:25:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:23.639 07:25:57 -- bdev/nbd_common.sh@41 -- # break 00:25:23.639 07:25:57 -- bdev/nbd_common.sh@45 -- # return 0 00:25:23.639 07:25:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:23.639 07:25:57 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:23.897 07:25:57 -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@41 -- # break 00:25:23.897 07:25:57 -- bdev/nbd_common.sh@45 -- # return 0 00:25:23.897 07:25:57 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:25:23.897 07:25:57 -- bdev/bdev_raid.sh@709 -- # killprocess 136550 00:25:23.897 07:25:57 -- common/autotest_common.sh@924 -- # '[' -z 136550 ']' 00:25:23.897 07:25:57 -- common/autotest_common.sh@928 -- # kill -0 136550 00:25:23.897 07:25:57 -- common/autotest_common.sh@929 -- # uname 00:25:23.897 07:25:57 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:23.897 07:25:57 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 136550 00:25:23.897 07:25:57 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:23.897 07:25:57 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:23.897 07:25:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 136550' 00:25:23.898 killing process with pid 136550 00:25:23.898 07:25:57 -- common/autotest_common.sh@943 -- # kill 136550 00:25:23.898 Received shutdown signal, test time was about 60.000000 seconds 00:25:23.898 00:25:23.898 Latency(us) 00:25:23.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:23.898 =================================================================================================================== 00:25:23.898 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:23.898 07:25:57 -- common/autotest_common.sh@948 -- # wait 136550 00:25:23.898 [2024-02-13 07:25:57.494855] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:24.157 [2024-02-13 07:25:57.822310] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:25.116 ************************************ 00:25:25.116 END TEST raid5f_rebuild_test 00:25:25.116 ************************************ 00:25:25.116 07:25:58 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:25.116 00:25:25.116 real 0m24.658s 00:25:25.116 user 0m36.021s 00:25:25.116 sys 0m2.379s 00:25:25.116 07:25:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:25.116 07:25:58 -- common/autotest_common.sh@10 -- # set +x 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@749 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false 00:25:25.374 07:25:58 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:25:25.374 07:25:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:25.374 07:25:58 -- common/autotest_common.sh@10 -- # set +x 00:25:25.374 ************************************ 00:25:25.374 START TEST raid5f_rebuild_test_sb 00:25:25.374 ************************************ 00:25:25.374 07:25:58 -- common/autotest_common.sh@1102 -- # raid_rebuild_test raid5f 4 true false 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid5f 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done)) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev1 00:25:25.374 07:25:58 -- 
bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev2 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev3 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # echo BaseBdev4 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@528 -- # '[' raid5f '!=' raid1 ']' 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@529 -- # '[' false = true ']' 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@533 -- # strip_size=64 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@534 -- # create_arg+=' -z 64' 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@544 -- # raid_pid=137202 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@545 -- # waitforlisten 137202 /var/tmp/spdk-raid.sock 00:25:25.374 07:25:58 -- bdev/bdev_raid.sh@543 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:25:25.374 07:25:58 -- common/autotest_common.sh@817 -- # '[' -z 137202 ']' 00:25:25.374 07:25:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:25:25.374 07:25:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:25.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:25:25.374 07:25:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:25:25.374 07:25:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.374 07:25:58 -- common/autotest_common.sh@10 -- # set +x 00:25:25.374 [2024-02-13 07:25:58.912468] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:25:25.374 I/O size of 3145728 is greater than zero copy threshold (65536). 00:25:25.374 Zero copy mechanism will not be used. 
00:25:25.374 [2024-02-13 07:25:58.912649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137202 ] 00:25:25.632 [2024-02-13 07:25:59.078870] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.632 [2024-02-13 07:25:59.248234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:25.891 [2024-02-13 07:25:59.419988] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:26.149 07:25:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:26.149 07:25:59 -- common/autotest_common.sh@850 -- # return 0 00:25:26.149 07:25:59 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:26.149 07:25:59 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:26.149 07:25:59 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:25:26.408 BaseBdev1_malloc 00:25:26.408 07:26:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:26.667 [2024-02-13 07:26:00.199884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:26.667 [2024-02-13 07:26:00.199987] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:26.667 [2024-02-13 07:26:00.200021] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:25:26.667 [2024-02-13 07:26:00.200067] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:26.667 [2024-02-13 07:26:00.202468] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:26.667 [2024-02-13 07:26:00.202518] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:26.667 BaseBdev1 00:25:26.667 07:26:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:26.667 07:26:00 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:26.667 07:26:00 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:25:26.926 BaseBdev2_malloc 00:25:26.926 07:26:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:27.185 [2024-02-13 07:26:00.651413] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:27.185 [2024-02-13 07:26:00.651507] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.185 [2024-02-13 07:26:00.651549] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:25:27.185 [2024-02-13 07:26:00.651594] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.185 [2024-02-13 07:26:00.653495] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.185 [2024-02-13 07:26:00.653540] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:27.185 BaseBdev2 00:25:27.185 07:26:00 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:27.185 07:26:00 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:27.185 07:26:00 -- bdev/bdev_raid.sh@550 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:25:27.185 BaseBdev3_malloc 00:25:27.444 07:26:00 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:27.444 [2024-02-13 07:26:01.060844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:25:27.444 [2024-02-13 07:26:01.060933] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.444 [2024-02-13 07:26:01.060973] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:25:27.444 [2024-02-13 07:26:01.061016] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.444 [2024-02-13 07:26:01.063017] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.444 [2024-02-13 07:26:01.063070] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:27.444 BaseBdev3 00:25:27.444 07:26:01 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:25:27.444 07:26:01 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:25:27.444 07:26:01 -- bdev/bdev_raid.sh@550 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:25:27.703 BaseBdev4_malloc 00:25:27.703 07:26:01 -- bdev/bdev_raid.sh@551 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:27.962 [2024-02-13 07:26:01.473848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:27.962 [2024-02-13 07:26:01.473959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:27.962 [2024-02-13 07:26:01.473994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:25:27.962 [2024-02-13 07:26:01.474039] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:27.962 [2024-02-13 07:26:01.475863] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:27.962 [2024-02-13 07:26:01.475910] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:27.962 BaseBdev4 00:25:27.962 07:26:01 -- bdev/bdev_raid.sh@558 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:25:28.221 spare_malloc 00:25:28.221 07:26:01 -- bdev/bdev_raid.sh@559 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:25:28.479 spare_delay 00:25:28.479 07:26:01 -- bdev/bdev_raid.sh@560 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:28.479 [2024-02-13 07:26:02.114203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:28.479 [2024-02-13 07:26:02.114292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:28.479 [2024-02-13 07:26:02.114322] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:25:28.479 [2024-02-13 07:26:02.114360] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:28.479 [2024-02-13 07:26:02.116571] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:25:28.479 [2024-02-13 07:26:02.116628] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:28.479 spare 00:25:28.479 07:26:02 -- bdev/bdev_raid.sh@563 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid5f -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:25:28.737 [2024-02-13 07:26:02.298319] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:28.737 [2024-02-13 07:26:02.299887] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:28.737 [2024-02-13 07:26:02.299964] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:28.737 [2024-02-13 07:26:02.300020] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:28.737 [2024-02-13 07:26:02.300260] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000a880 00:25:28.737 [2024-02-13 07:26:02.300283] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:28.737 [2024-02-13 07:26:02.300391] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:25:28.737 [2024-02-13 07:26:02.305616] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000a880 00:25:28.737 [2024-02-13 07:26:02.305642] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000a880 00:25:28.737 [2024-02-13 07:26:02.305820] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:28.737 07:26:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:28.996 07:26:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:28.996 "name": "raid_bdev1", 00:25:28.996 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:28.996 "strip_size_kb": 64, 00:25:28.996 "state": "online", 00:25:28.996 "raid_level": "raid5f", 00:25:28.996 "superblock": true, 00:25:28.996 "num_base_bdevs": 4, 00:25:28.996 "num_base_bdevs_discovered": 4, 00:25:28.996 "num_base_bdevs_operational": 4, 00:25:28.996 "base_bdevs_list": [ 00:25:28.996 { 00:25:28.996 "name": "BaseBdev1", 00:25:28.996 "uuid": "83aa7c1d-3f47-52f7-9e87-5b0c9830f9a7", 00:25:28.996 "is_configured": true, 00:25:28.996 "data_offset": 2048, 00:25:28.996 "data_size": 63488 00:25:28.996 }, 00:25:28.996 { 00:25:28.996 "name": "BaseBdev2", 00:25:28.996 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:28.996 "is_configured": true, 00:25:28.996 
"data_offset": 2048, 00:25:28.996 "data_size": 63488 00:25:28.996 }, 00:25:28.996 { 00:25:28.996 "name": "BaseBdev3", 00:25:28.996 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:28.996 "is_configured": true, 00:25:28.996 "data_offset": 2048, 00:25:28.996 "data_size": 63488 00:25:28.996 }, 00:25:28.996 { 00:25:28.996 "name": "BaseBdev4", 00:25:28.996 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:28.996 "is_configured": true, 00:25:28.996 "data_offset": 2048, 00:25:28.996 "data_size": 63488 00:25:28.996 } 00:25:28.996 ] 00:25:28.996 }' 00:25:28.996 07:26:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:28.996 07:26:02 -- common/autotest_common.sh@10 -- # set +x 00:25:29.563 07:26:03 -- bdev/bdev_raid.sh@567 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:25:29.563 07:26:03 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:25:29.563 [2024-02-13 07:26:03.216186] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:29.563 07:26:03 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=190464 00:25:29.563 07:26:03 -- bdev/bdev_raid.sh@570 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:29.563 07:26:03 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:25:29.823 07:26:03 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:25:29.823 07:26:03 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:25:29.823 07:26:03 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:25:29.823 07:26:03 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:25:29.823 07:26:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:29.823 07:26:03 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:29.823 07:26:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:29.823 07:26:03 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:29.823 07:26:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:29.823 07:26:03 -- bdev/nbd_common.sh@12 -- # local i 00:25:29.823 07:26:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:29.823 07:26:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:29.823 07:26:03 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:25:30.082 [2024-02-13 07:26:03.600162] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:30.082 /dev/nbd0 00:25:30.082 07:26:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:30.082 07:26:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:30.082 07:26:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:30.082 07:26:03 -- common/autotest_common.sh@855 -- # local i 00:25:30.082 07:26:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:30.082 07:26:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:30.082 07:26:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:30.082 07:26:03 -- common/autotest_common.sh@859 -- # break 00:25:30.082 07:26:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:30.082 07:26:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:30.082 07:26:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:30.082 1+0 records in 00:25:30.082 1+0 records out 00:25:30.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210385 s, 
19.5 MB/s 00:25:30.082 07:26:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.082 07:26:03 -- common/autotest_common.sh@872 -- # size=4096 00:25:30.082 07:26:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:30.082 07:26:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:30.082 07:26:03 -- common/autotest_common.sh@875 -- # return 0 00:25:30.082 07:26:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:30.082 07:26:03 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:30.082 07:26:03 -- bdev/bdev_raid.sh@580 -- # '[' raid5f = raid5f ']' 00:25:30.082 07:26:03 -- bdev/bdev_raid.sh@581 -- # write_unit_size=384 00:25:30.082 07:26:03 -- bdev/bdev_raid.sh@582 -- # echo 192 00:25:30.082 07:26:03 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:25:30.650 496+0 records in 00:25:30.650 496+0 records out 00:25:30.650 97517568 bytes (98 MB, 93 MiB) copied, 0.496798 s, 196 MB/s 00:25:30.650 07:26:04 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:25:30.650 07:26:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:30.650 07:26:04 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:30.650 07:26:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:30.650 07:26:04 -- bdev/nbd_common.sh@51 -- # local i 00:25:30.650 07:26:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:30.650 07:26:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:30.909 07:26:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:30.909 07:26:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:30.909 07:26:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:30.909 07:26:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:30.909 07:26:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:30.909 07:26:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:30.909 [2024-02-13 07:26:04.445256] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:30.909 07:26:04 -- bdev/nbd_common.sh@41 -- # break 00:25:30.909 07:26:04 -- bdev/nbd_common.sh@45 -- # return 0 00:25:30.909 07:26:04 -- bdev/bdev_raid.sh@591 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:25:31.168 [2024-02-13 07:26:04.668335] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:31.168 07:26:04 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:31.168 07:26:04 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:31.427 07:26:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:31.427 "name": "raid_bdev1", 00:25:31.427 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:31.427 "strip_size_kb": 64, 00:25:31.427 "state": "online", 00:25:31.427 "raid_level": "raid5f", 00:25:31.427 "superblock": true, 00:25:31.427 "num_base_bdevs": 4, 00:25:31.427 "num_base_bdevs_discovered": 3, 00:25:31.427 "num_base_bdevs_operational": 3, 00:25:31.427 "base_bdevs_list": [ 00:25:31.427 { 00:25:31.427 "name": null, 00:25:31.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:31.427 "is_configured": false, 00:25:31.427 "data_offset": 2048, 00:25:31.427 "data_size": 63488 00:25:31.427 }, 00:25:31.427 { 00:25:31.427 "name": "BaseBdev2", 00:25:31.427 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:31.427 "is_configured": true, 00:25:31.427 "data_offset": 2048, 00:25:31.427 "data_size": 63488 00:25:31.427 }, 00:25:31.427 { 00:25:31.427 "name": "BaseBdev3", 00:25:31.427 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:31.427 "is_configured": true, 00:25:31.427 "data_offset": 2048, 00:25:31.427 "data_size": 63488 00:25:31.427 }, 00:25:31.427 { 00:25:31.427 "name": "BaseBdev4", 00:25:31.427 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:31.427 "is_configured": true, 00:25:31.427 "data_offset": 2048, 00:25:31.427 "data_size": 63488 00:25:31.427 } 00:25:31.427 ] 00:25:31.427 }' 00:25:31.427 07:26:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:31.427 07:26:04 -- common/autotest_common.sh@10 -- # set +x 00:25:31.995 07:26:05 -- bdev/bdev_raid.sh@597 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:32.254 [2024-02-13 07:26:05.856522] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:25:32.254 [2024-02-13 07:26:05.856568] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:32.254 [2024-02-13 07:26:05.867009] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c2b0 00:25:32.254 [2024-02-13 07:26:05.874081] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:32.254 07:26:05 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:25:33.192 07:26:06 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:33.192 07:26:06 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:33.192 07:26:06 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:33.192 07:26:06 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:33.192 07:26:06 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:33.192 07:26:06 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.192 07:26:06 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:33.451 07:26:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:33.451 "name": "raid_bdev1", 00:25:33.451 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:33.451 "strip_size_kb": 64, 00:25:33.451 "state": "online", 00:25:33.451 "raid_level": "raid5f", 00:25:33.451 "superblock": true, 00:25:33.451 "num_base_bdevs": 4, 00:25:33.451 "num_base_bdevs_discovered": 4, 00:25:33.451 "num_base_bdevs_operational": 4, 00:25:33.451 "process": { 00:25:33.451 "type": "rebuild", 00:25:33.451 "target": "spare", 00:25:33.451 "progress": { 00:25:33.451 "blocks": 
23040, 00:25:33.451 "percent": 12 00:25:33.451 } 00:25:33.451 }, 00:25:33.451 "base_bdevs_list": [ 00:25:33.451 { 00:25:33.451 "name": "spare", 00:25:33.451 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:33.451 "is_configured": true, 00:25:33.451 "data_offset": 2048, 00:25:33.451 "data_size": 63488 00:25:33.451 }, 00:25:33.451 { 00:25:33.451 "name": "BaseBdev2", 00:25:33.451 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:33.451 "is_configured": true, 00:25:33.451 "data_offset": 2048, 00:25:33.451 "data_size": 63488 00:25:33.451 }, 00:25:33.451 { 00:25:33.451 "name": "BaseBdev3", 00:25:33.451 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:33.451 "is_configured": true, 00:25:33.451 "data_offset": 2048, 00:25:33.451 "data_size": 63488 00:25:33.451 }, 00:25:33.451 { 00:25:33.451 "name": "BaseBdev4", 00:25:33.451 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:33.451 "is_configured": true, 00:25:33.451 "data_offset": 2048, 00:25:33.451 "data_size": 63488 00:25:33.451 } 00:25:33.451 ] 00:25:33.451 }' 00:25:33.452 07:26:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:33.711 07:26:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:33.711 07:26:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:33.711 07:26:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:33.711 07:26:07 -- bdev/bdev_raid.sh@604 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:25:33.970 [2024-02-13 07:26:07.447559] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:25:33.970 [2024-02-13 07:26:07.484897] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:25:33.970 [2024-02-13 07:26:07.484979] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:33.970 07:26:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:34.229 07:26:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:34.229 "name": "raid_bdev1", 00:25:34.229 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:34.229 "strip_size_kb": 64, 00:25:34.229 "state": "online", 00:25:34.229 "raid_level": "raid5f", 00:25:34.229 "superblock": true, 00:25:34.229 "num_base_bdevs": 4, 00:25:34.229 "num_base_bdevs_discovered": 3, 00:25:34.229 "num_base_bdevs_operational": 3, 00:25:34.229 "base_bdevs_list": [ 00:25:34.229 { 00:25:34.229 "name": null, 00:25:34.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:34.229 "is_configured": false, 00:25:34.229 
"data_offset": 2048, 00:25:34.229 "data_size": 63488 00:25:34.229 }, 00:25:34.229 { 00:25:34.229 "name": "BaseBdev2", 00:25:34.229 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:34.229 "is_configured": true, 00:25:34.229 "data_offset": 2048, 00:25:34.229 "data_size": 63488 00:25:34.229 }, 00:25:34.229 { 00:25:34.229 "name": "BaseBdev3", 00:25:34.229 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:34.229 "is_configured": true, 00:25:34.229 "data_offset": 2048, 00:25:34.229 "data_size": 63488 00:25:34.229 }, 00:25:34.229 { 00:25:34.229 "name": "BaseBdev4", 00:25:34.229 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:34.229 "is_configured": true, 00:25:34.229 "data_offset": 2048, 00:25:34.229 "data_size": 63488 00:25:34.229 } 00:25:34.229 ] 00:25:34.229 }' 00:25:34.229 07:26:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:34.229 07:26:07 -- common/autotest_common.sh@10 -- # set +x 00:25:34.797 07:26:08 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:34.797 07:26:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:34.797 07:26:08 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:34.797 07:26:08 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:34.797 07:26:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:34.797 07:26:08 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:34.798 07:26:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:35.058 07:26:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:35.058 "name": "raid_bdev1", 00:25:35.058 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:35.058 "strip_size_kb": 64, 00:25:35.058 "state": "online", 00:25:35.058 "raid_level": "raid5f", 00:25:35.058 "superblock": true, 00:25:35.058 "num_base_bdevs": 4, 00:25:35.058 "num_base_bdevs_discovered": 3, 00:25:35.058 "num_base_bdevs_operational": 3, 00:25:35.058 "base_bdevs_list": [ 00:25:35.058 { 00:25:35.058 "name": null, 00:25:35.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:25:35.058 "is_configured": false, 00:25:35.058 "data_offset": 2048, 00:25:35.058 "data_size": 63488 00:25:35.058 }, 00:25:35.058 { 00:25:35.058 "name": "BaseBdev2", 00:25:35.058 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:35.058 "is_configured": true, 00:25:35.058 "data_offset": 2048, 00:25:35.058 "data_size": 63488 00:25:35.058 }, 00:25:35.058 { 00:25:35.058 "name": "BaseBdev3", 00:25:35.058 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:35.058 "is_configured": true, 00:25:35.058 "data_offset": 2048, 00:25:35.058 "data_size": 63488 00:25:35.058 }, 00:25:35.058 { 00:25:35.058 "name": "BaseBdev4", 00:25:35.058 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:35.058 "is_configured": true, 00:25:35.058 "data_offset": 2048, 00:25:35.058 "data_size": 63488 00:25:35.058 } 00:25:35.058 ] 00:25:35.058 }' 00:25:35.058 07:26:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:35.058 07:26:08 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:35.058 07:26:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:35.319 07:26:08 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:35.319 07:26:08 -- bdev/bdev_raid.sh@613 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:25:35.319 [2024-02-13 07:26:08.938851] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: 
attach_base_device: spare 00:25:35.319 [2024-02-13 07:26:08.938896] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:35.319 [2024-02-13 07:26:08.948978] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002c450 00:25:35.319 [2024-02-13 07:26:08.955449] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:25:35.319 07:26:08 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:25:36.697 07:26:09 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.697 07:26:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:36.697 07:26:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:36.697 07:26:09 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:36.697 07:26:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:36.697 07:26:09 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.697 07:26:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.697 07:26:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.697 "name": "raid_bdev1", 00:25:36.697 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:36.697 "strip_size_kb": 64, 00:25:36.697 "state": "online", 00:25:36.697 "raid_level": "raid5f", 00:25:36.697 "superblock": true, 00:25:36.697 "num_base_bdevs": 4, 00:25:36.697 "num_base_bdevs_discovered": 4, 00:25:36.697 "num_base_bdevs_operational": 4, 00:25:36.697 "process": { 00:25:36.697 "type": "rebuild", 00:25:36.697 "target": "spare", 00:25:36.697 "progress": { 00:25:36.697 "blocks": 23040, 00:25:36.698 "percent": 12 00:25:36.698 } 00:25:36.698 }, 00:25:36.698 "base_bdevs_list": [ 00:25:36.698 { 00:25:36.698 "name": "spare", 00:25:36.698 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:36.698 "is_configured": true, 00:25:36.698 "data_offset": 2048, 00:25:36.698 "data_size": 63488 00:25:36.698 }, 00:25:36.698 { 00:25:36.698 "name": "BaseBdev2", 00:25:36.698 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:36.698 "is_configured": true, 00:25:36.698 "data_offset": 2048, 00:25:36.698 "data_size": 63488 00:25:36.698 }, 00:25:36.698 { 00:25:36.698 "name": "BaseBdev3", 00:25:36.698 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:36.698 "is_configured": true, 00:25:36.698 "data_offset": 2048, 00:25:36.698 "data_size": 63488 00:25:36.698 }, 00:25:36.698 { 00:25:36.698 "name": "BaseBdev4", 00:25:36.698 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:36.698 "is_configured": true, 00:25:36.698 "data_offset": 2048, 00:25:36.698 "data_size": 63488 00:25:36.698 } 00:25:36.698 ] 00:25:36.698 }' 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:25:36.698 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@644 -- # '[' raid5f = raid1 ']' 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@657 -- # local timeout=751 00:25:36.698 
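The verify_raid_bdev_process helper polled throughout the rebuild above is almost fully visible in the xtrace: it records the expected process type and target, fetches the raid bdev's JSON description over the RPC socket, and compares the .process fields. A sketch reconstructed from the bdev_raid.sh@183-191 trace lines — an approximation of the helper, not a copy of the script's actual body:

    # Reconstructed from the @183-191 trace above; a sketch, details may differ.
    verify_raid_bdev_process() {
        local raid_bdev_name=$1
        local process_type=$2
        local target=$3
        local raid_bdev_info

        # Fetch this raid bdev's JSON description over the RPC socket.
        raid_bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
            -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
            jq -r ".[] | select(.name == \"$raid_bdev_name\")")

        # '// "none"' substitutes a default when no background process is
        # running, which is how the caller detects that the rebuild finished.
        [[ $(jq -r '.process.type // "none"' <<< "$raid_bdev_info") == "$process_type" ]]
        [[ $(jq -r '.process.target // "none"' <<< "$raid_bdev_info") == "$target" ]]
    }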
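Just above, '[' = false ']' fails with "[: =: unary operator expected" at bdev_raid.sh line 617. That is the classic single-bracket pitfall: a variable to the left of = was expanded unquoted and happened to be empty, so the [ builtin received = as its first argument. The non-zero status is evidently tolerated on this path, so the run continues into the rebuild-polling loop. A minimal reproduction and the two usual fixes — the name "flag" is illustrative, not the variable actually used in bdev_raid.sh:

    #!/usr/bin/env bash
    flag=    # empty, e.g. an optional argument that was never supplied

    # Broken: after expansion this is '[ = false ]', so '[' sees '=' where
    # an operand belongs and prints "[: =: unary operator expected".
    if [ $flag = false ]; then echo skipped; fi

    # Fix 1: quote the expansion so '[' always receives an operand, even an empty one.
    if [ "$flag" = false ]; then echo skipped; fi

    # Fix 2: use the bash [[ ]] keyword, which does not word-split its operands.
    if [[ $flag = false ]]; then echo skipped; fi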
07:26:10 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:36.698 07:26:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:36.957 07:26:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:36.957 "name": "raid_bdev1", 00:25:36.957 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:36.957 "strip_size_kb": 64, 00:25:36.957 "state": "online", 00:25:36.957 "raid_level": "raid5f", 00:25:36.957 "superblock": true, 00:25:36.957 "num_base_bdevs": 4, 00:25:36.957 "num_base_bdevs_discovered": 4, 00:25:36.957 "num_base_bdevs_operational": 4, 00:25:36.957 "process": { 00:25:36.957 "type": "rebuild", 00:25:36.957 "target": "spare", 00:25:36.957 "progress": { 00:25:36.957 "blocks": 28800, 00:25:36.957 "percent": 15 00:25:36.957 } 00:25:36.957 }, 00:25:36.957 "base_bdevs_list": [ 00:25:36.957 { 00:25:36.957 "name": "spare", 00:25:36.957 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:36.957 "is_configured": true, 00:25:36.957 "data_offset": 2048, 00:25:36.957 "data_size": 63488 00:25:36.957 }, 00:25:36.957 { 00:25:36.957 "name": "BaseBdev2", 00:25:36.957 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:36.957 "is_configured": true, 00:25:36.957 "data_offset": 2048, 00:25:36.957 "data_size": 63488 00:25:36.957 }, 00:25:36.957 { 00:25:36.957 "name": "BaseBdev3", 00:25:36.957 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:36.957 "is_configured": true, 00:25:36.957 "data_offset": 2048, 00:25:36.957 "data_size": 63488 00:25:36.957 }, 00:25:36.957 { 00:25:36.957 "name": "BaseBdev4", 00:25:36.957 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:36.957 "is_configured": true, 00:25:36.957 "data_offset": 2048, 00:25:36.957 "data_size": 63488 00:25:36.957 } 00:25:36.957 ] 00:25:36.957 }' 00:25:36.957 07:26:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:36.957 07:26:10 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:36.957 07:26:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:36.957 07:26:10 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:36.957 07:26:10 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:38.334 "name": "raid_bdev1", 
00:25:38.334 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:38.334 "strip_size_kb": 64, 00:25:38.334 "state": "online", 00:25:38.334 "raid_level": "raid5f", 00:25:38.334 "superblock": true, 00:25:38.334 "num_base_bdevs": 4, 00:25:38.334 "num_base_bdevs_discovered": 4, 00:25:38.334 "num_base_bdevs_operational": 4, 00:25:38.334 "process": { 00:25:38.334 "type": "rebuild", 00:25:38.334 "target": "spare", 00:25:38.334 "progress": { 00:25:38.334 "blocks": 53760, 00:25:38.334 "percent": 28 00:25:38.334 } 00:25:38.334 }, 00:25:38.334 "base_bdevs_list": [ 00:25:38.334 { 00:25:38.334 "name": "spare", 00:25:38.334 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:38.334 "is_configured": true, 00:25:38.334 "data_offset": 2048, 00:25:38.334 "data_size": 63488 00:25:38.334 }, 00:25:38.334 { 00:25:38.334 "name": "BaseBdev2", 00:25:38.334 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:38.334 "is_configured": true, 00:25:38.334 "data_offset": 2048, 00:25:38.334 "data_size": 63488 00:25:38.334 }, 00:25:38.334 { 00:25:38.334 "name": "BaseBdev3", 00:25:38.334 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:38.334 "is_configured": true, 00:25:38.334 "data_offset": 2048, 00:25:38.334 "data_size": 63488 00:25:38.334 }, 00:25:38.334 { 00:25:38.334 "name": "BaseBdev4", 00:25:38.334 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:38.334 "is_configured": true, 00:25:38.334 "data_offset": 2048, 00:25:38.334 "data_size": 63488 00:25:38.334 } 00:25:38.334 ] 00:25:38.334 }' 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:38.334 07:26:11 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:39.271 07:26:12 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:39.271 07:26:12 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:39.271 07:26:12 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:39.271 07:26:12 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:39.271 07:26:12 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:39.271 07:26:12 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:39.271 07:26:12 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:39.271 07:26:12 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:39.530 07:26:13 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:39.530 "name": "raid_bdev1", 00:25:39.530 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:39.530 "strip_size_kb": 64, 00:25:39.530 "state": "online", 00:25:39.530 "raid_level": "raid5f", 00:25:39.530 "superblock": true, 00:25:39.530 "num_base_bdevs": 4, 00:25:39.530 "num_base_bdevs_discovered": 4, 00:25:39.530 "num_base_bdevs_operational": 4, 00:25:39.530 "process": { 00:25:39.530 "type": "rebuild", 00:25:39.530 "target": "spare", 00:25:39.530 "progress": { 00:25:39.530 "blocks": 78720, 00:25:39.530 "percent": 41 00:25:39.530 } 00:25:39.530 }, 00:25:39.530 "base_bdevs_list": [ 00:25:39.530 { 00:25:39.530 "name": "spare", 00:25:39.530 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:39.530 "is_configured": true, 00:25:39.530 "data_offset": 2048, 00:25:39.530 "data_size": 63488 00:25:39.530 }, 00:25:39.530 { 00:25:39.530 "name": 
"BaseBdev2", 00:25:39.530 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:39.530 "is_configured": true, 00:25:39.530 "data_offset": 2048, 00:25:39.530 "data_size": 63488 00:25:39.530 }, 00:25:39.530 { 00:25:39.530 "name": "BaseBdev3", 00:25:39.530 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:39.530 "is_configured": true, 00:25:39.530 "data_offset": 2048, 00:25:39.530 "data_size": 63488 00:25:39.530 }, 00:25:39.530 { 00:25:39.530 "name": "BaseBdev4", 00:25:39.530 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:39.530 "is_configured": true, 00:25:39.530 "data_offset": 2048, 00:25:39.530 "data_size": 63488 00:25:39.530 } 00:25:39.530 ] 00:25:39.530 }' 00:25:39.530 07:26:13 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:39.789 07:26:13 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:39.789 07:26:13 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:39.789 07:26:13 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:39.789 07:26:13 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:40.725 07:26:14 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:40.725 07:26:14 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:40.725 07:26:14 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:40.725 07:26:14 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:40.725 07:26:14 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:40.725 07:26:14 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:40.725 07:26:14 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:40.725 07:26:14 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:40.984 07:26:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:40.984 "name": "raid_bdev1", 00:25:40.984 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:40.984 "strip_size_kb": 64, 00:25:40.984 "state": "online", 00:25:40.984 "raid_level": "raid5f", 00:25:40.984 "superblock": true, 00:25:40.984 "num_base_bdevs": 4, 00:25:40.984 "num_base_bdevs_discovered": 4, 00:25:40.984 "num_base_bdevs_operational": 4, 00:25:40.984 "process": { 00:25:40.984 "type": "rebuild", 00:25:40.984 "target": "spare", 00:25:40.984 "progress": { 00:25:40.984 "blocks": 105600, 00:25:40.984 "percent": 55 00:25:40.984 } 00:25:40.984 }, 00:25:40.984 "base_bdevs_list": [ 00:25:40.984 { 00:25:40.984 "name": "spare", 00:25:40.984 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:40.984 "is_configured": true, 00:25:40.984 "data_offset": 2048, 00:25:40.984 "data_size": 63488 00:25:40.984 }, 00:25:40.984 { 00:25:40.984 "name": "BaseBdev2", 00:25:40.984 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:40.984 "is_configured": true, 00:25:40.984 "data_offset": 2048, 00:25:40.984 "data_size": 63488 00:25:40.984 }, 00:25:40.984 { 00:25:40.984 "name": "BaseBdev3", 00:25:40.984 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:40.984 "is_configured": true, 00:25:40.984 "data_offset": 2048, 00:25:40.984 "data_size": 63488 00:25:40.984 }, 00:25:40.984 { 00:25:40.984 "name": "BaseBdev4", 00:25:40.984 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:40.984 "is_configured": true, 00:25:40.984 "data_offset": 2048, 00:25:40.984 "data_size": 63488 00:25:40.984 } 00:25:40.984 ] 00:25:40.984 }' 00:25:40.984 07:26:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:40.984 07:26:14 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:25:40.984 07:26:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:40.984 07:26:14 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:40.984 07:26:14 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:42.361 "name": "raid_bdev1", 00:25:42.361 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:42.361 "strip_size_kb": 64, 00:25:42.361 "state": "online", 00:25:42.361 "raid_level": "raid5f", 00:25:42.361 "superblock": true, 00:25:42.361 "num_base_bdevs": 4, 00:25:42.361 "num_base_bdevs_discovered": 4, 00:25:42.361 "num_base_bdevs_operational": 4, 00:25:42.361 "process": { 00:25:42.361 "type": "rebuild", 00:25:42.361 "target": "spare", 00:25:42.361 "progress": { 00:25:42.361 "blocks": 130560, 00:25:42.361 "percent": 68 00:25:42.361 } 00:25:42.361 }, 00:25:42.361 "base_bdevs_list": [ 00:25:42.361 { 00:25:42.361 "name": "spare", 00:25:42.361 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:42.361 "is_configured": true, 00:25:42.361 "data_offset": 2048, 00:25:42.361 "data_size": 63488 00:25:42.361 }, 00:25:42.361 { 00:25:42.361 "name": "BaseBdev2", 00:25:42.361 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:42.361 "is_configured": true, 00:25:42.361 "data_offset": 2048, 00:25:42.361 "data_size": 63488 00:25:42.361 }, 00:25:42.361 { 00:25:42.361 "name": "BaseBdev3", 00:25:42.361 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:42.361 "is_configured": true, 00:25:42.361 "data_offset": 2048, 00:25:42.361 "data_size": 63488 00:25:42.361 }, 00:25:42.361 { 00:25:42.361 "name": "BaseBdev4", 00:25:42.361 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:42.361 "is_configured": true, 00:25:42.361 "data_offset": 2048, 00:25:42.361 "data_size": 63488 00:25:42.361 } 00:25:42.361 ] 00:25:42.361 }' 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:42.361 07:26:15 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:43.297 07:26:16 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:43.297 07:26:16 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:43.297 07:26:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:43.297 07:26:16 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:43.297 07:26:16 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:43.297 07:26:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:43.297 07:26:16 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:43.297 07:26:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:43.556 07:26:17 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:43.556 "name": "raid_bdev1", 00:25:43.556 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:43.556 "strip_size_kb": 64, 00:25:43.556 "state": "online", 00:25:43.556 "raid_level": "raid5f", 00:25:43.556 "superblock": true, 00:25:43.556 "num_base_bdevs": 4, 00:25:43.556 "num_base_bdevs_discovered": 4, 00:25:43.556 "num_base_bdevs_operational": 4, 00:25:43.556 "process": { 00:25:43.556 "type": "rebuild", 00:25:43.556 "target": "spare", 00:25:43.556 "progress": { 00:25:43.556 "blocks": 155520, 00:25:43.556 "percent": 81 00:25:43.556 } 00:25:43.556 }, 00:25:43.556 "base_bdevs_list": [ 00:25:43.556 { 00:25:43.556 "name": "spare", 00:25:43.556 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:43.556 "is_configured": true, 00:25:43.556 "data_offset": 2048, 00:25:43.556 "data_size": 63488 00:25:43.556 }, 00:25:43.556 { 00:25:43.556 "name": "BaseBdev2", 00:25:43.556 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:43.556 "is_configured": true, 00:25:43.556 "data_offset": 2048, 00:25:43.556 "data_size": 63488 00:25:43.556 }, 00:25:43.556 { 00:25:43.556 "name": "BaseBdev3", 00:25:43.556 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:43.556 "is_configured": true, 00:25:43.556 "data_offset": 2048, 00:25:43.556 "data_size": 63488 00:25:43.556 }, 00:25:43.556 { 00:25:43.556 "name": "BaseBdev4", 00:25:43.556 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:43.556 "is_configured": true, 00:25:43.556 "data_offset": 2048, 00:25:43.556 "data_size": 63488 00:25:43.556 } 00:25:43.556 ] 00:25:43.556 }' 00:25:43.556 07:26:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:43.815 07:26:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:43.815 07:26:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:43.815 07:26:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:43.815 07:26:17 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:44.750 07:26:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:44.750 07:26:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:44.750 07:26:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:44.750 07:26:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:44.750 07:26:18 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:44.750 07:26:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:44.750 07:26:18 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:44.750 07:26:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:45.009 07:26:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:45.009 "name": "raid_bdev1", 00:25:45.009 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:45.009 "strip_size_kb": 64, 00:25:45.009 "state": "online", 00:25:45.009 "raid_level": "raid5f", 00:25:45.009 "superblock": true, 00:25:45.009 "num_base_bdevs": 4, 00:25:45.009 "num_base_bdevs_discovered": 4, 00:25:45.009 "num_base_bdevs_operational": 4, 00:25:45.009 "process": { 00:25:45.009 "type": "rebuild", 00:25:45.009 "target": "spare", 00:25:45.009 "progress": { 00:25:45.009 "blocks": 182400, 00:25:45.009 "percent": 95 00:25:45.009 } 00:25:45.009 }, 00:25:45.009 "base_bdevs_list": [ 00:25:45.009 { 
00:25:45.009 "name": "spare", 00:25:45.009 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:45.009 "is_configured": true, 00:25:45.009 "data_offset": 2048, 00:25:45.009 "data_size": 63488 00:25:45.009 }, 00:25:45.009 { 00:25:45.009 "name": "BaseBdev2", 00:25:45.009 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:45.009 "is_configured": true, 00:25:45.009 "data_offset": 2048, 00:25:45.009 "data_size": 63488 00:25:45.009 }, 00:25:45.009 { 00:25:45.009 "name": "BaseBdev3", 00:25:45.009 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:45.009 "is_configured": true, 00:25:45.009 "data_offset": 2048, 00:25:45.009 "data_size": 63488 00:25:45.009 }, 00:25:45.009 { 00:25:45.009 "name": "BaseBdev4", 00:25:45.009 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:45.009 "is_configured": true, 00:25:45.009 "data_offset": 2048, 00:25:45.009 "data_size": 63488 00:25:45.009 } 00:25:45.009 ] 00:25:45.009 }' 00:25:45.009 07:26:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:45.009 07:26:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:25:45.009 07:26:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:45.009 07:26:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:25:45.009 07:26:18 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:25:45.577 [2024-02-13 07:26:19.020603] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:25:45.577 [2024-02-13 07:26:19.020672] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:25:45.577 [2024-02-13 07:26:19.020841] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:46.145 07:26:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:25:46.145 07:26:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:25:46.145 07:26:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:46.145 07:26:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:25:46.145 07:26:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:25:46.145 07:26:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:46.145 07:26:19 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.145 07:26:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.403 07:26:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:46.403 "name": "raid_bdev1", 00:25:46.403 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:46.403 "strip_size_kb": 64, 00:25:46.403 "state": "online", 00:25:46.403 "raid_level": "raid5f", 00:25:46.403 "superblock": true, 00:25:46.403 "num_base_bdevs": 4, 00:25:46.403 "num_base_bdevs_discovered": 4, 00:25:46.403 "num_base_bdevs_operational": 4, 00:25:46.403 "base_bdevs_list": [ 00:25:46.403 { 00:25:46.403 "name": "spare", 00:25:46.403 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:46.403 "is_configured": true, 00:25:46.403 "data_offset": 2048, 00:25:46.403 "data_size": 63488 00:25:46.403 }, 00:25:46.403 { 00:25:46.403 "name": "BaseBdev2", 00:25:46.403 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:46.403 "is_configured": true, 00:25:46.403 "data_offset": 2048, 00:25:46.403 "data_size": 63488 00:25:46.403 }, 00:25:46.403 { 00:25:46.403 "name": "BaseBdev3", 00:25:46.403 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:46.403 "is_configured": true, 00:25:46.403 "data_offset": 2048, 00:25:46.403 "data_size": 63488 
00:25:46.403 }, 00:25:46.403 { 00:25:46.403 "name": "BaseBdev4", 00:25:46.403 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:46.403 "is_configured": true, 00:25:46.403 "data_offset": 2048, 00:25:46.403 "data_size": 63488 00:25:46.403 } 00:25:46.403 ] 00:25:46.403 }' 00:25:46.403 07:26:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:46.404 07:26:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:25:46.404 07:26:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:46.404 07:26:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:25:46.404 07:26:20 -- bdev/bdev_raid.sh@660 -- # break 00:25:46.404 07:26:20 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:46.404 07:26:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:46.404 07:26:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:46.404 07:26:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:46.404 07:26:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:46.404 07:26:20 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.404 07:26:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.665 07:26:20 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:46.665 "name": "raid_bdev1", 00:25:46.665 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:46.665 "strip_size_kb": 64, 00:25:46.665 "state": "online", 00:25:46.665 "raid_level": "raid5f", 00:25:46.665 "superblock": true, 00:25:46.665 "num_base_bdevs": 4, 00:25:46.665 "num_base_bdevs_discovered": 4, 00:25:46.665 "num_base_bdevs_operational": 4, 00:25:46.665 "base_bdevs_list": [ 00:25:46.665 { 00:25:46.665 "name": "spare", 00:25:46.665 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:46.665 "is_configured": true, 00:25:46.665 "data_offset": 2048, 00:25:46.665 "data_size": 63488 00:25:46.665 }, 00:25:46.665 { 00:25:46.665 "name": "BaseBdev2", 00:25:46.665 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:46.665 "is_configured": true, 00:25:46.665 "data_offset": 2048, 00:25:46.665 "data_size": 63488 00:25:46.665 }, 00:25:46.665 { 00:25:46.665 "name": "BaseBdev3", 00:25:46.665 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:46.665 "is_configured": true, 00:25:46.665 "data_offset": 2048, 00:25:46.665 "data_size": 63488 00:25:46.665 }, 00:25:46.665 { 00:25:46.665 "name": "BaseBdev4", 00:25:46.665 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:46.665 "is_configured": true, 00:25:46.665 "data_offset": 2048, 00:25:46.665 "data_size": 63488 00:25:46.665 } 00:25:46.665 ] 00:25:46.665 }' 00:25:46.665 07:26:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:46.665 07:26:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:46.665 07:26:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:46.937 07:26:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:46.937 07:26:20 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:46.937 07:26:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:46.937 07:26:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:46.937 07:26:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:46.938 07:26:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:46.938 07:26:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:46.938 07:26:20 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:46.938 07:26:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:46.938 07:26:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:46.938 07:26:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:46.938 07:26:20 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:46.938 07:26:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:46.938 07:26:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:46.938 "name": "raid_bdev1", 00:25:46.938 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:46.938 "strip_size_kb": 64, 00:25:46.938 "state": "online", 00:25:46.938 "raid_level": "raid5f", 00:25:46.938 "superblock": true, 00:25:46.938 "num_base_bdevs": 4, 00:25:46.938 "num_base_bdevs_discovered": 4, 00:25:46.938 "num_base_bdevs_operational": 4, 00:25:46.938 "base_bdevs_list": [ 00:25:46.938 { 00:25:46.938 "name": "spare", 00:25:46.938 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:46.938 "is_configured": true, 00:25:46.938 "data_offset": 2048, 00:25:46.938 "data_size": 63488 00:25:46.938 }, 00:25:46.938 { 00:25:46.938 "name": "BaseBdev2", 00:25:46.938 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:46.938 "is_configured": true, 00:25:46.938 "data_offset": 2048, 00:25:46.938 "data_size": 63488 00:25:46.938 }, 00:25:46.938 { 00:25:46.938 "name": "BaseBdev3", 00:25:46.938 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:46.938 "is_configured": true, 00:25:46.938 "data_offset": 2048, 00:25:46.938 "data_size": 63488 00:25:46.938 }, 00:25:46.938 { 00:25:46.938 "name": "BaseBdev4", 00:25:46.938 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:46.938 "is_configured": true, 00:25:46.938 "data_offset": 2048, 00:25:46.938 "data_size": 63488 00:25:46.938 } 00:25:46.938 ] 00:25:46.938 }' 00:25:46.938 07:26:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:46.938 07:26:20 -- common/autotest_common.sh@10 -- # set +x 00:25:47.888 07:26:21 -- bdev/bdev_raid.sh@670 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:25:47.888 [2024-02-13 07:26:21.523178] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:47.888 [2024-02-13 07:26:21.523214] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:47.888 [2024-02-13 07:26:21.523308] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:47.888 [2024-02-13 07:26:21.523421] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:47.888 [2024-02-13 07:26:21.523434] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000a880 name raid_bdev1, state offline 00:25:47.888 07:26:21 -- bdev/bdev_raid.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:47.888 07:26:21 -- bdev/bdev_raid.sh@671 -- # jq length 00:25:48.147 07:26:21 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:25:48.147 07:26:21 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:25:48.147 07:26:21 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:25:48.147 07:26:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:48.147 07:26:21 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:25:48.147 07:26:21 -- bdev/nbd_common.sh@10 -- 
# local bdev_list 00:25:48.147 07:26:21 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:25:48.147 07:26:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:48.147 07:26:21 -- bdev/nbd_common.sh@12 -- # local i 00:25:48.147 07:26:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:48.147 07:26:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:48.147 07:26:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:25:48.406 /dev/nbd0 00:25:48.406 07:26:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:48.406 07:26:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:25:48.406 07:26:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:25:48.406 07:26:21 -- common/autotest_common.sh@855 -- # local i 00:25:48.406 07:26:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:48.406 07:26:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:48.406 07:26:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:25:48.406 07:26:21 -- common/autotest_common.sh@859 -- # break 00:25:48.406 07:26:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:48.406 07:26:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:48.406 07:26:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:48.406 1+0 records in 00:25:48.406 1+0 records out 00:25:48.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193793 s, 21.1 MB/s 00:25:48.406 07:26:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:48.406 07:26:21 -- common/autotest_common.sh@872 -- # size=4096 00:25:48.406 07:26:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:48.406 07:26:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:48.406 07:26:21 -- common/autotest_common.sh@875 -- # return 0 00:25:48.406 07:26:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:48.406 07:26:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:48.406 07:26:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:25:48.676 /dev/nbd1 00:25:48.676 07:26:22 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:25:48.676 07:26:22 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:25:48.676 07:26:22 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:25:48.676 07:26:22 -- common/autotest_common.sh@855 -- # local i 00:25:48.676 07:26:22 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:25:48.676 07:26:22 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:25:48.676 07:26:22 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:25:48.676 07:26:22 -- common/autotest_common.sh@859 -- # break 00:25:48.676 07:26:22 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:25:48.676 07:26:22 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:25:48.676 07:26:22 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:48.676 1+0 records in 00:25:48.676 1+0 records out 00:25:48.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528627 s, 7.7 MB/s 00:25:48.676 07:26:22 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:48.676 07:26:22 -- common/autotest_common.sh@872 -- # size=4096 00:25:48.676 07:26:22 -- 
common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:48.676 07:26:22 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:25:48.676 07:26:22 -- common/autotest_common.sh@875 -- # return 0 00:25:48.676 07:26:22 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:48.676 07:26:22 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:25:48.676 07:26:22 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:25:48.936 07:26:22 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:25:48.936 07:26:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:25:48.936 07:26:22 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:25:48.936 07:26:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:48.936 07:26:22 -- bdev/nbd_common.sh@51 -- # local i 00:25:48.936 07:26:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:48.936 07:26:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@41 -- # break 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@45 -- # return 0 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:49.195 07:26:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@41 -- # break 00:25:49.453 07:26:23 -- bdev/nbd_common.sh@45 -- # return 0 00:25:49.453 07:26:23 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:25:49.453 07:26:23 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:49.453 07:26:23 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:25:49.453 07:26:23 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:25:49.712 07:26:23 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:25:49.970 [2024-02-13 07:26:23.584744] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:25:49.970 [2024-02-13 07:26:23.584825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:49.971 [2024-02-13 07:26:23.584864] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:25:49.971 [2024-02-13 07:26:23.584884] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:49.971 [2024-02-13 07:26:23.587468] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:49.971 [2024-02-13 07:26:23.587533] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:25:49.971 [2024-02-13 07:26:23.587667] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:25:49.971 [2024-02-13 07:26:23.587738] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:25:49.971 BaseBdev1 00:25:49.971 07:26:23 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:49.971 07:26:23 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:25:49.971 07:26:23 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:25:50.230 07:26:23 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:25:50.489 [2024-02-13 07:26:24.048814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:25:50.489 [2024-02-13 07:26:24.048882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:50.489 [2024-02-13 07:26:24.048922] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:25:50.489 [2024-02-13 07:26:24.048941] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:50.489 [2024-02-13 07:26:24.049521] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:50.489 [2024-02-13 07:26:24.049609] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:25:50.489 [2024-02-13 07:26:24.049719] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:25:50.489 [2024-02-13 07:26:24.049750] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:25:50.489 [2024-02-13 07:26:24.049757] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:25:50.489 [2024-02-13 07:26:24.049779] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000bd80 name raid_bdev1, state configuring 00:25:50.489 [2024-02-13 07:26:24.049848] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:25:50.489 BaseBdev2 00:25:50.489 07:26:24 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:50.489 07:26:24 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:25:50.489 07:26:24 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:25:50.748 07:26:24 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:25:50.748 [2024-02-13 07:26:24.436875] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev3_malloc 00:25:50.748 [2024-02-13 07:26:24.436931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:50.748 [2024-02-13 07:26:24.436958] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:25:50.748 [2024-02-13 07:26:24.436979] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:50.748 [2024-02-13 07:26:24.437414] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:50.748 [2024-02-13 07:26:24.437534] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:25:50.748 [2024-02-13 07:26:24.437611] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:25:50.748 [2024-02-13 07:26:24.437633] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:25:50.748 BaseBdev3 00:25:51.007 07:26:24 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:25:51.007 07:26:24 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:25:51.007 07:26:24 -- bdev/bdev_raid.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:25:51.007 07:26:24 -- bdev/bdev_raid.sh@699 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:25:51.265 [2024-02-13 07:26:24.832973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:25:51.265 [2024-02-13 07:26:24.833050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.265 [2024-02-13 07:26:24.833095] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:25:51.265 [2024-02-13 07:26:24.833127] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.265 [2024-02-13 07:26:24.833634] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.265 [2024-02-13 07:26:24.833720] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:25:51.265 [2024-02-13 07:26:24.833850] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:25:51.265 [2024-02-13 07:26:24.833878] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:25:51.265 BaseBdev4 00:25:51.265 07:26:24 -- bdev/bdev_raid.sh@701 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:25:51.524 07:26:25 -- bdev/bdev_raid.sh@702 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:25:51.524 [2024-02-13 07:26:25.209023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:25:51.524 [2024-02-13 07:26:25.209088] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:51.524 [2024-02-13 07:26:25.209116] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:25:51.524 [2024-02-13 07:26:25.209137] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:51.524 [2024-02-13 07:26:25.209616] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:51.524 [2024-02-13 07:26:25.209697] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:25:51.524 [2024-02-13 07:26:25.209802] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:25:51.524 [2024-02-13 07:26:25.209863] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:25:51.524 spare 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid5f 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:51.782 07:26:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:51.782 [2024-02-13 07:26:25.309986] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x61600000c380 00:25:51.782 [2024-02-13 07:26:25.310007] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:25:51.783 [2024-02-13 07:26:25.310138] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00004d200 00:25:51.783 [2024-02-13 07:26:25.315339] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x61600000c380 00:25:51.783 [2024-02-13 07:26:25.315359] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x61600000c380 00:25:51.783 [2024-02-13 07:26:25.315524] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:51.783 07:26:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:25:51.783 "name": "raid_bdev1", 00:25:51.783 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:51.783 "strip_size_kb": 64, 00:25:51.783 "state": "online", 00:25:51.783 "raid_level": "raid5f", 00:25:51.783 "superblock": true, 00:25:51.783 "num_base_bdevs": 4, 00:25:51.783 "num_base_bdevs_discovered": 4, 00:25:51.783 "num_base_bdevs_operational": 4, 00:25:51.783 "base_bdevs_list": [ 00:25:51.783 { 00:25:51.783 "name": "spare", 00:25:51.783 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:51.783 "is_configured": true, 00:25:51.783 "data_offset": 2048, 00:25:51.783 "data_size": 63488 00:25:51.783 }, 00:25:51.783 { 00:25:51.783 "name": "BaseBdev2", 00:25:51.783 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:51.783 "is_configured": true, 00:25:51.783 "data_offset": 2048, 00:25:51.783 "data_size": 63488 00:25:51.783 }, 00:25:51.783 { 00:25:51.783 "name": "BaseBdev3", 00:25:51.783 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:51.783 "is_configured": true, 00:25:51.783 "data_offset": 2048, 00:25:51.783 "data_size": 63488 00:25:51.783 }, 00:25:51.783 { 00:25:51.783 "name": "BaseBdev4", 00:25:51.783 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:51.783 "is_configured": true, 00:25:51.783 "data_offset": 2048, 00:25:51.783 "data_size": 63488 00:25:51.783 } 00:25:51.783 ] 00:25:51.783 }' 00:25:51.783 07:26:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:25:51.783 07:26:25 -- 
common/autotest_common.sh@10 -- # set +x 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@188 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:25:52.720 "name": "raid_bdev1", 00:25:52.720 "uuid": "2b81de98-913d-4f5c-a11f-8b3678f1cc2f", 00:25:52.720 "strip_size_kb": 64, 00:25:52.720 "state": "online", 00:25:52.720 "raid_level": "raid5f", 00:25:52.720 "superblock": true, 00:25:52.720 "num_base_bdevs": 4, 00:25:52.720 "num_base_bdevs_discovered": 4, 00:25:52.720 "num_base_bdevs_operational": 4, 00:25:52.720 "base_bdevs_list": [ 00:25:52.720 { 00:25:52.720 "name": "spare", 00:25:52.720 "uuid": "83781d30-778f-516f-9263-a99ec755cf81", 00:25:52.720 "is_configured": true, 00:25:52.720 "data_offset": 2048, 00:25:52.720 "data_size": 63488 00:25:52.720 }, 00:25:52.720 { 00:25:52.720 "name": "BaseBdev2", 00:25:52.720 "uuid": "796c971a-090c-57e6-b5bd-d76d066689e0", 00:25:52.720 "is_configured": true, 00:25:52.720 "data_offset": 2048, 00:25:52.720 "data_size": 63488 00:25:52.720 }, 00:25:52.720 { 00:25:52.720 "name": "BaseBdev3", 00:25:52.720 "uuid": "8c13133e-0b7a-53af-a9bb-83e8a6405f1a", 00:25:52.720 "is_configured": true, 00:25:52.720 "data_offset": 2048, 00:25:52.720 "data_size": 63488 00:25:52.720 }, 00:25:52.720 { 00:25:52.720 "name": "BaseBdev4", 00:25:52.720 "uuid": "0ac5eba1-d864-525c-a47b-9dfb1f53565c", 00:25:52.720 "is_configured": true, 00:25:52.720 "data_offset": 2048, 00:25:52.720 "data_size": 63488 00:25:52.720 } 00:25:52.720 ] 00:25:52.720 }' 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@706 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:25:52.720 07:26:26 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:25:52.979 07:26:26 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:25:52.979 07:26:26 -- bdev/bdev_raid.sh@709 -- # killprocess 137202 00:25:52.979 07:26:26 -- common/autotest_common.sh@924 -- # '[' -z 137202 ']' 00:25:52.979 07:26:26 -- common/autotest_common.sh@928 -- # kill -0 137202 00:25:52.979 07:26:26 -- common/autotest_common.sh@929 -- # uname 00:25:52.979 07:26:26 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:25:52.979 07:26:26 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 137202 00:25:52.979 killing process with pid 137202 00:25:52.979 Received shutdown signal, test time was about 60.000000 seconds 00:25:52.979 00:25:52.979 Latency(us) 00:25:52.979 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.979 =================================================================================================================== 
00:25:52.979 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:52.979 07:26:26 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:25:52.979 07:26:26 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:25:52.979 07:26:26 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 137202' 00:25:52.979 07:26:26 -- common/autotest_common.sh@943 -- # kill 137202 00:25:52.979 07:26:26 -- common/autotest_common.sh@948 -- # wait 137202 00:25:52.979 [2024-02-13 07:26:26.621555] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:52.979 [2024-02-13 07:26:26.621657] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:52.979 [2024-02-13 07:26:26.621768] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:52.979 [2024-02-13 07:26:26.621789] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x61600000c380 name raid_bdev1, state offline 00:25:53.565 [2024-02-13 07:26:26.943338] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:54.501 ************************************ 00:25:54.501 END TEST raid5f_rebuild_test_sb 00:25:54.501 ************************************ 00:25:54.501 07:26:27 -- bdev/bdev_raid.sh@711 -- # return 0 00:25:54.501 00:25:54.501 real 0m29.069s 00:25:54.501 user 0m44.347s 00:25:54.501 sys 0m3.046s 00:25:54.501 07:26:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:54.501 07:26:27 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 07:26:27 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:25:54.501 00:25:54.501 real 12m18.889s 00:25:54.501 user 20m30.464s 00:25:54.501 sys 1m30.215s 00:25:54.501 07:26:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:54.501 07:26:27 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 ************************************ 00:25:54.501 END TEST bdev_raid 00:25:54.501 ************************************ 00:25:54.501 07:26:27 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:25:54.501 07:26:27 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:25:54.501 07:26:27 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:25:54.501 07:26:27 -- common/autotest_common.sh@10 -- # set +x 00:25:54.501 ************************************ 00:25:54.501 START TEST bdevperf_config 00:25:54.501 ************************************ 00:25:54.501 07:26:28 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:25:54.501 * Looking for test storage... 
00:25:54.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:25:54.501 07:26:28 -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:25:54.501 07:26:28 -- bdevperf/common.sh@8 -- # local job_section=global 00:25:54.501 07:26:28 -- bdevperf/common.sh@9 -- # local rw=read 00:25:54.501 07:26:28 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:25:54.501 07:26:28 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:25:54.501 07:26:28 -- bdevperf/common.sh@13 -- # cat 00:25:54.501 00:25:54.501 07:26:28 -- bdevperf/common.sh@18 -- # job='[global]' 00:25:54.501 07:26:28 -- bdevperf/common.sh@19 -- # echo 00:25:54.501 07:26:28 -- bdevperf/common.sh@20 -- # cat 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@18 -- # create_job job0 00:25:54.501 07:26:28 -- bdevperf/common.sh@8 -- # local job_section=job0 00:25:54.501 07:26:28 -- bdevperf/common.sh@9 -- # local rw= 00:25:54.501 07:26:28 -- bdevperf/common.sh@10 -- # local filename= 00:25:54.501 00:25:54.501 07:26:28 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:25:54.501 07:26:28 -- bdevperf/common.sh@18 -- # job='[job0]' 00:25:54.501 07:26:28 -- bdevperf/common.sh@19 -- # echo 00:25:54.501 07:26:28 -- bdevperf/common.sh@20 -- # cat 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@19 -- # create_job job1 00:25:54.501 07:26:28 -- bdevperf/common.sh@8 -- # local job_section=job1 00:25:54.501 07:26:28 -- bdevperf/common.sh@9 -- # local rw= 00:25:54.501 07:26:28 -- bdevperf/common.sh@10 -- # local filename= 00:25:54.501 00:25:54.501 07:26:28 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:25:54.501 07:26:28 -- bdevperf/common.sh@18 -- # job='[job1]' 00:25:54.501 07:26:28 -- bdevperf/common.sh@19 -- # echo 00:25:54.501 07:26:28 -- bdevperf/common.sh@20 -- # cat 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@20 -- # create_job job2 00:25:54.501 07:26:28 -- bdevperf/common.sh@8 -- # local job_section=job2 00:25:54.501 07:26:28 -- bdevperf/common.sh@9 -- # local rw= 00:25:54.501 07:26:28 -- bdevperf/common.sh@10 -- # local filename= 00:25:54.501 00:25:54.501 07:26:28 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:25:54.501 07:26:28 -- bdevperf/common.sh@18 -- # job='[job2]' 00:25:54.501 07:26:28 -- bdevperf/common.sh@19 -- # echo 00:25:54.501 07:26:28 -- bdevperf/common.sh@20 -- # cat 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@21 -- # create_job job3 00:25:54.501 07:26:28 -- bdevperf/common.sh@8 -- # local job_section=job3 00:25:54.501 07:26:28 -- bdevperf/common.sh@9 -- # local rw= 00:25:54.501 07:26:28 -- bdevperf/common.sh@10 -- # local filename= 00:25:54.501 07:26:28 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:25:54.501 07:26:28 -- bdevperf/common.sh@18 -- # job='[job3]' 00:25:54.501 00:25:54.501 07:26:28 -- bdevperf/common.sh@19 -- # echo 00:25:54.501 07:26:28 -- bdevperf/common.sh@20 -- # cat 00:25:54.501 07:26:28 -- bdevperf/test_config.sh@22 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:58.691 07:26:32 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-02-13 07:26:28.157387] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:25:58.691 [2024-02-13 07:26:28.157575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138031 ] 00:25:58.691 Using job config with 4 jobs 00:25:58.691 [2024-02-13 07:26:28.332738] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.691 [2024-02-13 07:26:28.526214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.691 [2024-02-13 07:26:28.526357] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:25:58.691 cpumask for '\''job0'\'' is too big 00:25:58.691 cpumask for '\''job1'\'' is too big 00:25:58.691 cpumask for '\''job2'\'' is too big 00:25:58.691 cpumask for '\''job3'\'' is too big 00:25:58.691 Running I/O for 2 seconds... 00:25:58.691 00:25:58.691 Latency(us) 00:25:58.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.691 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.691 Malloc0 : 2.01 33477.33 32.69 0.00 0.00 7641.12 1422.43 11856.06 00:25:58.691 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.691 Malloc0 : 2.01 33454.99 32.67 0.00 0.00 7633.19 1407.53 10366.60 00:25:58.691 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.691 Malloc0 : 2.02 33496.39 32.71 0.00 0.00 7612.32 1370.30 8996.31 00:25:58.691 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.691 Malloc0 : 2.02 33474.54 32.69 0.00 0.00 7604.13 1362.85 8519.68 00:25:58.691 =================================================================================================================== 00:25:58.691 Total : 133903.26 130.76 0.00 0.00 7622.66 1362.85 11856.06 00:25:58.691 [2024-02-13 07:26:30.957161] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:25:58.691 07:26:32 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-02-13 07:26:28.157387] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:25:58.691 [2024-02-13 07:26:28.157575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138031 ] 00:25:58.691 Using job config with 4 jobs 00:25:58.691 [2024-02-13 07:26:28.332738] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.691 [2024-02-13 07:26:28.526214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.692 [2024-02-13 07:26:28.526357] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:25:58.692 cpumask for '\''job0'\'' is too big 00:25:58.692 cpumask for '\''job1'\'' is too big 00:25:58.692 cpumask for '\''job2'\'' is too big 00:25:58.692 cpumask for '\''job3'\'' is too big 00:25:58.692 Running I/O for 2 seconds... 00:25:58.692 00:25:58.692 Latency(us) 00:25:58.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.692 Malloc0 : 2.01 33477.33 32.69 0.00 0.00 7641.12 1422.43 11856.06 00:25:58.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.692 Malloc0 : 2.01 33454.99 32.67 0.00 0.00 7633.19 1407.53 10366.60 00:25:58.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.692 Malloc0 : 2.02 33496.39 32.71 0.00 0.00 7612.32 1370.30 8996.31 00:25:58.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.692 Malloc0 : 2.02 33474.54 32.69 0.00 0.00 7604.13 1362.85 8519.68 00:25:58.692 =================================================================================================================== 00:25:58.692 Total : 133903.26 130.76 0.00 0.00 7622.66 1362.85 11856.06 00:25:58.692 [2024-02-13 07:26:30.957161] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:25:58.692 07:26:32 -- bdevperf/common.sh@32 -- # echo '[2024-02-13 07:26:28.157387] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:25:58.692 [2024-02-13 07:26:28.157575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138031 ] 00:25:58.692 Using job config with 4 jobs 00:25:58.692 [2024-02-13 07:26:28.332738] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.692 [2024-02-13 07:26:28.526214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.692 [2024-02-13 07:26:28.526357] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:25:58.692 cpumask for '\''job0'\'' is too big 00:25:58.692 cpumask for '\''job1'\'' is too big 00:25:58.692 cpumask for '\''job2'\'' is too big 00:25:58.692 cpumask for '\''job3'\'' is too big 00:25:58.692 Running I/O for 2 seconds... 
00:25:58.692 00:25:58.692 Latency(us) 00:25:58.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.692 Malloc0 : 2.01 33477.33 32.69 0.00 0.00 7641.12 1422.43 11856.06 00:25:58.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.692 Malloc0 : 2.01 33454.99 32.67 0.00 0.00 7633.19 1407.53 10366.60 00:25:58.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.692 Malloc0 : 2.02 33496.39 32.71 0.00 0.00 7612.32 1370.30 8996.31 00:25:58.692 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:25:58.692 Malloc0 : 2.02 33474.54 32.69 0.00 0.00 7604.13 1362.85 8519.68 00:25:58.692 =================================================================================================================== 00:25:58.692 Total : 133903.26 130.76 0.00 0.00 7622.66 1362.85 11856.06 00:25:58.692 [2024-02-13 07:26:30.957161] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:25:58.692 07:26:32 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:25:58.692 07:26:32 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:25:58.692 07:26:32 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:25:58.692 07:26:32 -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:25:58.692 [2024-02-13 07:26:32.179707] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:25:58.692 [2024-02-13 07:26:32.179854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138089 ] 00:25:58.692 [2024-02-13 07:26:32.332036] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:58.951 [2024-02-13 07:26:32.524781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.951 [2024-02-13 07:26:32.524903] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:25:59.518 cpumask for 'job0' is too big 00:25:59.518 cpumask for 'job1' is too big 00:25:59.518 cpumask for 'job2' is too big 00:25:59.518 cpumask for 'job3' is too big 00:26:01.422 [2024-02-13 07:26:34.952612] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:02.799 07:26:36 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:26:02.799 Running I/O for 2 seconds... 
00:26:02.799 00:26:02.799 Latency(us) 00:26:02.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.799 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:02.799 Malloc0 : 2.01 33356.99 32.58 0.00 0.00 7667.89 1467.11 11856.06 00:26:02.799 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:02.799 Malloc0 : 2.01 33334.72 32.55 0.00 0.00 7660.49 1377.75 10426.18 00:26:02.799 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:02.799 Malloc0 : 2.02 33361.96 32.58 0.00 0.00 7641.07 1414.98 9055.88 00:26:02.799 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:26:02.799 Malloc0 : 2.02 33336.78 32.56 0.00 0.00 7634.30 1377.75 8936.73 00:26:02.799 =================================================================================================================== 00:26:02.799 Total : 133390.45 130.26 0.00 0.00 7650.91 1377.75 11856.06' 00:26:02.799 07:26:36 -- bdevperf/test_config.sh@27 -- # cleanup 00:26:02.799 07:26:36 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:02.799 07:26:36 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:26:02.799 07:26:36 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:02.799 07:26:36 -- bdevperf/common.sh@9 -- # local rw=write 00:26:02.799 07:26:36 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:02.799 07:26:36 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:02.799 07:26:36 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:02.799 07:26:36 -- bdevperf/common.sh@19 -- # echo 00:26:02.799 00:26:02.799 07:26:36 -- bdevperf/common.sh@20 -- # cat 00:26:02.799 07:26:36 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:26:02.799 00:26:02.799 07:26:36 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:02.799 07:26:36 -- bdevperf/common.sh@9 -- # local rw=write 00:26:02.799 07:26:36 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:02.799 07:26:36 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:02.799 07:26:36 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:02.799 07:26:36 -- bdevperf/common.sh@19 -- # echo 00:26:02.799 07:26:36 -- bdevperf/common.sh@20 -- # cat 00:26:02.799 07:26:36 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:26:02.799 07:26:36 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:02.799 00:26:02.799 07:26:36 -- bdevperf/common.sh@9 -- # local rw=write 00:26:02.799 07:26:36 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:26:02.799 07:26:36 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:26:02.799 07:26:36 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:02.799 07:26:36 -- bdevperf/common.sh@19 -- # echo 00:26:02.799 07:26:36 -- bdevperf/common.sh@20 -- # cat 00:26:02.799 07:26:36 -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:06.985 07:26:40 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-02-13 07:26:36.196291] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:26:06.985 [2024-02-13 07:26:36.196474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138145 ] 00:26:06.985 Using job config with 3 jobs 00:26:06.985 [2024-02-13 07:26:36.364441] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.985 [2024-02-13 07:26:36.545212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.985 [2024-02-13 07:26:36.545363] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:06.985 cpumask for '\''job0'\'' is too big 00:26:06.985 cpumask for '\''job1'\'' is too big 00:26:06.985 cpumask for '\''job2'\'' is too big 00:26:06.985 Running I/O for 2 seconds... 00:26:06.985 00:26:06.985 Latency(us) 00:26:06.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.985 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:06.985 Malloc0 : 2.01 45010.48 43.96 0.00 0.00 5681.20 1429.88 8519.68 00:26:06.985 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:06.985 Malloc0 : 2.01 45022.62 43.97 0.00 0.00 5670.49 1318.17 7179.17 00:26:06.985 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:06.985 Malloc0 : 2.01 44993.25 43.94 0.00 0.00 5665.51 1362.85 6225.92 00:26:06.985 =================================================================================================================== 00:26:06.985 Total : 135026.35 131.86 0.00 0.00 5672.39 1318.17 8519.68 00:26:06.985 [2024-02-13 07:26:38.971904] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:26:06.985 07:26:40 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-02-13 07:26:36.196291] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:06.985 [2024-02-13 07:26:36.196474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138145 ] 00:26:06.985 Using job config with 3 jobs 00:26:06.985 [2024-02-13 07:26:36.364441] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.985 [2024-02-13 07:26:36.545212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.985 [2024-02-13 07:26:36.545363] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:06.985 cpumask for '\''job0'\'' is too big 00:26:06.985 cpumask for '\''job1'\'' is too big 00:26:06.985 cpumask for '\''job2'\'' is too big 00:26:06.985 Running I/O for 2 seconds... 
00:26:06.985 00:26:06.985 Latency(us) 00:26:06.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.985 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:06.985 Malloc0 : 2.01 45010.48 43.96 0.00 0.00 5681.20 1429.88 8519.68 00:26:06.985 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:06.985 Malloc0 : 2.01 45022.62 43.97 0.00 0.00 5670.49 1318.17 7179.17 00:26:06.985 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:06.985 Malloc0 : 2.01 44993.25 43.94 0.00 0.00 5665.51 1362.85 6225.92 00:26:06.985 =================================================================================================================== 00:26:06.985 Total : 135026.35 131.86 0.00 0.00 5672.39 1318.17 8519.68 00:26:06.985 [2024-02-13 07:26:38.971904] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:26:06.985 07:26:40 -- bdevperf/common.sh@32 -- # echo '[2024-02-13 07:26:36.196291] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:06.985 [2024-02-13 07:26:36.196474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138145 ] 00:26:06.985 Using job config with 3 jobs 00:26:06.985 [2024-02-13 07:26:36.364441] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.985 [2024-02-13 07:26:36.545212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.985 [2024-02-13 07:26:36.545363] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:06.985 cpumask for '\''job0'\'' is too big 00:26:06.985 cpumask for '\''job1'\'' is too big 00:26:06.985 cpumask for '\''job2'\'' is too big 00:26:06.985 Running I/O for 2 seconds... 
00:26:06.985 00:26:06.985 Latency(us) 00:26:06.985 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.985 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:06.985 Malloc0 : 2.01 45010.48 43.96 0.00 0.00 5681.20 1429.88 8519.68 00:26:06.985 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:06.985 Malloc0 : 2.01 45022.62 43.97 0.00 0.00 5670.49 1318.17 7179.17 00:26:06.986 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:26:06.986 Malloc0 : 2.01 44993.25 43.94 0.00 0.00 5665.51 1362.85 6225.92 00:26:06.986 =================================================================================================================== 00:26:06.986 Total : 135026.35 131.86 0.00 0.00 5672.39 1318.17 8519.68 00:26:06.986 [2024-02-13 07:26:38.971904] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:26:06.986 07:26:40 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:06.986 07:26:40 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:06.986 07:26:40 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:26:06.986 07:26:40 -- bdevperf/test_config.sh@35 -- # cleanup 00:26:06.986 07:26:40 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:06.986 07:26:40 -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:26:06.986 07:26:40 -- bdevperf/common.sh@8 -- # local job_section=global 00:26:06.986 07:26:40 -- bdevperf/common.sh@9 -- # local rw=rw 00:26:06.986 07:26:40 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:26:06.986 07:26:40 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:26:06.986 07:26:40 -- bdevperf/common.sh@13 -- # cat 00:26:06.986 07:26:40 -- bdevperf/common.sh@18 -- # job='[global]' 00:26:06.986 07:26:40 -- bdevperf/common.sh@19 -- # echo 00:26:06.986 00:26:06.986 07:26:40 -- bdevperf/common.sh@20 -- # cat 00:26:06.986 07:26:40 -- bdevperf/test_config.sh@38 -- # create_job job0 00:26:06.986 00:26:06.986 07:26:40 -- bdevperf/common.sh@8 -- # local job_section=job0 00:26:06.986 07:26:40 -- bdevperf/common.sh@9 -- # local rw= 00:26:06.986 07:26:40 -- bdevperf/common.sh@10 -- # local filename= 00:26:06.986 07:26:40 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:26:06.986 07:26:40 -- bdevperf/common.sh@18 -- # job='[job0]' 00:26:06.986 07:26:40 -- bdevperf/common.sh@19 -- # echo 00:26:06.986 07:26:40 -- bdevperf/common.sh@20 -- # cat 00:26:06.986 07:26:40 -- bdevperf/test_config.sh@39 -- # create_job job1 00:26:06.986 07:26:40 -- bdevperf/common.sh@8 -- # local job_section=job1 00:26:06.986 00:26:06.986 07:26:40 -- bdevperf/common.sh@9 -- # local rw= 00:26:06.986 07:26:40 -- bdevperf/common.sh@10 -- # local filename= 00:26:06.986 07:26:40 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:26:06.986 07:26:40 -- bdevperf/common.sh@18 -- # job='[job1]' 00:26:06.986 07:26:40 -- bdevperf/common.sh@19 -- # echo 00:26:06.986 07:26:40 -- bdevperf/common.sh@20 -- # cat 00:26:06.986 07:26:40 -- bdevperf/test_config.sh@40 -- # create_job job2 00:26:06.986 00:26:06.986 07:26:40 -- bdevperf/common.sh@8 -- # local job_section=job2 00:26:06.986 07:26:40 -- bdevperf/common.sh@9 -- # local rw= 00:26:06.986 07:26:40 -- bdevperf/common.sh@10 -- # local filename= 00:26:06.986 07:26:40 -- bdevperf/common.sh@12 -- # 
[[ job2 == \g\l\o\b\a\l ]] 00:26:06.986 07:26:40 -- bdevperf/common.sh@18 -- # job='[job2]' 00:26:06.986 07:26:40 -- bdevperf/common.sh@19 -- # echo 00:26:06.986 07:26:40 -- bdevperf/common.sh@20 -- # cat 00:26:06.986 07:26:40 -- bdevperf/test_config.sh@41 -- # create_job job3 00:26:06.986 00:26:06.986 07:26:40 -- bdevperf/common.sh@8 -- # local job_section=job3 00:26:06.986 07:26:40 -- bdevperf/common.sh@9 -- # local rw= 00:26:06.986 07:26:40 -- bdevperf/common.sh@10 -- # local filename= 00:26:06.986 07:26:40 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:26:06.986 07:26:40 -- bdevperf/common.sh@18 -- # job='[job3]' 00:26:06.986 07:26:40 -- bdevperf/common.sh@19 -- # echo 00:26:06.986 07:26:40 -- bdevperf/common.sh@20 -- # cat 00:26:06.986 07:26:40 -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:11.180 07:26:44 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-02-13 07:26:40.206715] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:11.180 [2024-02-13 07:26:40.206902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138203 ] 00:26:11.180 Using job config with 4 jobs 00:26:11.180 [2024-02-13 07:26:40.360510] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.180 [2024-02-13 07:26:40.557516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.180 [2024-02-13 07:26:40.557645] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:11.180 cpumask for '\''job0'\'' is too big 00:26:11.180 cpumask for '\''job1'\'' is too big 00:26:11.180 cpumask for '\''job2'\'' is too big 00:26:11.180 cpumask for '\''job3'\'' is too big 00:26:11.180 Running I/O for 2 seconds... 
00:26:11.180 00:26:11.180 Latency(us) 00:26:11.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.180 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.180 Malloc0 : 2.03 16528.63 16.14 0.00 0.00 15490.95 2934.23 24188.74 00:26:11.180 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.180 Malloc1 : 2.03 16517.17 16.13 0.00 0.00 15489.52 3425.75 24188.74 00:26:11.180 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.180 Malloc0 : 2.03 16506.44 16.12 0.00 0.00 15459.71 2815.07 21328.99 00:26:11.180 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.180 Malloc1 : 2.03 16495.49 16.11 0.00 0.00 15457.40 3366.17 21328.99 00:26:11.180 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.03 16484.69 16.10 0.00 0.00 15425.83 2904.44 18350.08 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.04 16473.67 16.09 0.00 0.00 15426.82 3395.96 18350.08 00:26:11.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.04 16463.07 16.08 0.00 0.00 15397.43 2874.65 16681.89 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.04 16452.04 16.07 0.00 0.00 15397.13 3366.17 16681.89 00:26:11.181 =================================================================================================================== 00:26:11.181 Total : 131921.20 128.83 0.00 0.00 15443.10 2815.07 24188.74 00:26:11.181 [2024-02-13 07:26:43.016958] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:26:11.181 07:26:44 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-02-13 07:26:40.206715] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:11.181 [2024-02-13 07:26:40.206902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138203 ] 00:26:11.181 Using job config with 4 jobs 00:26:11.181 [2024-02-13 07:26:40.360510] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.181 [2024-02-13 07:26:40.557516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.181 [2024-02-13 07:26:40.557645] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:11.181 cpumask for '\''job0'\'' is too big 00:26:11.181 cpumask for '\''job1'\'' is too big 00:26:11.181 cpumask for '\''job2'\'' is too big 00:26:11.181 cpumask for '\''job3'\'' is too big 00:26:11.181 Running I/O for 2 seconds... 
00:26:11.181 00:26:11.181 Latency(us) 00:26:11.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.03 16528.63 16.14 0.00 0.00 15490.95 2934.23 24188.74 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.03 16517.17 16.13 0.00 0.00 15489.52 3425.75 24188.74 00:26:11.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.03 16506.44 16.12 0.00 0.00 15459.71 2815.07 21328.99 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.03 16495.49 16.11 0.00 0.00 15457.40 3366.17 21328.99 00:26:11.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.03 16484.69 16.10 0.00 0.00 15425.83 2904.44 18350.08 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.04 16473.67 16.09 0.00 0.00 15426.82 3395.96 18350.08 00:26:11.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.04 16463.07 16.08 0.00 0.00 15397.43 2874.65 16681.89 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.04 16452.04 16.07 0.00 0.00 15397.13 3366.17 16681.89 00:26:11.181 =================================================================================================================== 00:26:11.181 Total : 131921.20 128.83 0.00 0.00 15443.10 2815.07 24188.74 00:26:11.181 [2024-02-13 07:26:43.016958] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:26:11.181 07:26:44 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:26:11.181 07:26:44 -- bdevperf/common.sh@32 -- # echo '[2024-02-13 07:26:40.206715] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:11.181 [2024-02-13 07:26:40.206902] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138203 ] 00:26:11.181 Using job config with 4 jobs 00:26:11.181 [2024-02-13 07:26:40.360510] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:11.181 [2024-02-13 07:26:40.557516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.181 [2024-02-13 07:26:40.557645] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:11.181 cpumask for '\''job0'\'' is too big 00:26:11.181 cpumask for '\''job1'\'' is too big 00:26:11.181 cpumask for '\''job2'\'' is too big 00:26:11.181 cpumask for '\''job3'\'' is too big 00:26:11.181 Running I/O for 2 seconds... 
00:26:11.181 00:26:11.181 Latency(us) 00:26:11.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.03 16528.63 16.14 0.00 0.00 15490.95 2934.23 24188.74 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.03 16517.17 16.13 0.00 0.00 15489.52 3425.75 24188.74 00:26:11.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.03 16506.44 16.12 0.00 0.00 15459.71 2815.07 21328.99 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.03 16495.49 16.11 0.00 0.00 15457.40 3366.17 21328.99 00:26:11.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.03 16484.69 16.10 0.00 0.00 15425.83 2904.44 18350.08 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.04 16473.67 16.09 0.00 0.00 15426.82 3395.96 18350.08 00:26:11.181 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc0 : 2.04 16463.07 16.08 0.00 0.00 15397.43 2874.65 16681.89 00:26:11.181 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:26:11.181 Malloc1 : 2.04 16452.04 16.07 0.00 0.00 15397.13 3366.17 16681.89 00:26:11.181 =================================================================================================================== 00:26:11.181 Total : 131921.20 128.83 0.00 0.00 15443.10 2815.07 24188.74 00:26:11.181 [2024-02-13 07:26:43.016958] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation '\''spdk_subsystem_init_from_json_config is deprecated'\'' scheduled for removal in v24.09 hit 1 times' 00:26:11.181 07:26:44 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:26:11.181 07:26:44 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:26:11.181 07:26:44 -- bdevperf/test_config.sh@44 -- # cleanup 00:26:11.181 07:26:44 -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:26:11.181 07:26:44 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:11.181 00:26:11.181 real 0m16.175s 00:26:11.181 user 0m14.486s 00:26:11.181 sys 0m1.117s 00:26:11.181 ************************************ 00:26:11.181 END TEST bdevperf_config 00:26:11.181 ************************************ 00:26:11.181 07:26:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:11.181 07:26:44 -- common/autotest_common.sh@10 -- # set +x 00:26:11.181 07:26:44 -- spdk/autotest.sh@198 -- # uname -s 00:26:11.181 07:26:44 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:26:11.181 07:26:44 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:11.181 07:26:44 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:26:11.181 07:26:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:11.181 07:26:44 -- common/autotest_common.sh@10 -- # set +x 00:26:11.181 ************************************ 00:26:11.181 START TEST reactor_set_interrupt 00:26:11.181 ************************************ 00:26:11.181 07:26:44 -- common/autotest_common.sh@1102 -- # 
/home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:11.181 * Looking for test storage... 00:26:11.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:11.181 07:26:44 -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:11.181 07:26:44 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh 00:26:11.181 07:26:44 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:11.181 07:26:44 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:11.181 07:26:44 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:11.181 07:26:44 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:11.181 07:26:44 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:11.181 07:26:44 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:11.181 07:26:44 -- common/autotest_common.sh@34 -- # set -e 00:26:11.181 07:26:44 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:11.181 07:26:44 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:11.181 07:26:44 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:11.181 07:26:44 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:11.181 07:26:44 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:11.181 07:26:44 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:26:11.181 07:26:44 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:26:11.181 07:26:44 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:26:11.181 07:26:44 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:26:11.181 07:26:44 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:26:11.181 07:26:44 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:26:11.181 07:26:44 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:26:11.181 07:26:44 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:26:11.181 07:26:44 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:26:11.181 07:26:44 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:26:11.182 07:26:44 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:26:11.182 07:26:44 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:26:11.182 07:26:44 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 00:26:11.182 07:26:44 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:26:11.182 07:26:44 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:26:11.182 07:26:44 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:26:11.182 07:26:44 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:26:11.182 07:26:44 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:26:11.182 07:26:44 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:26:11.182 07:26:44 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:26:11.182 07:26:44 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:11.182 07:26:44 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:26:11.182 07:26:44 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:26:11.182 07:26:44 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:26:11.182 07:26:44 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:26:11.182 07:26:44 -- 
common/build_config.sh@27 -- # CONFIG_FUSE=n 00:26:11.182 07:26:44 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:11.182 07:26:44 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:26:11.182 07:26:44 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:26:11.182 07:26:44 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:26:11.182 07:26:44 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:26:11.182 07:26:44 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:26:11.182 07:26:44 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:26:11.182 07:26:44 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:26:11.182 07:26:44 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:26:11.182 07:26:44 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:11.182 07:26:44 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:26:11.182 07:26:44 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:26:11.182 07:26:44 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:26:11.182 07:26:44 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:26:11.182 07:26:44 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:26:11.182 07:26:44 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:26:11.182 07:26:44 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:26:11.182 07:26:44 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:11.182 07:26:44 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:26:11.182 07:26:44 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:26:11.182 07:26:44 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:26:11.182 07:26:44 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:26:11.182 07:26:44 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:26:11.182 07:26:44 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:11.182 07:26:44 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:26:11.182 07:26:44 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:26:11.182 07:26:44 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:26:11.182 07:26:44 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:11.182 07:26:44 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:26:11.182 07:26:44 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:26:11.182 07:26:44 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:11.182 07:26:44 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:11.182 07:26:44 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:11.182 07:26:44 -- common/build_config.sh@61 -- # CONFIG_CROSS_PREFIX= 00:26:11.182 07:26:44 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:26:11.182 07:26:44 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:26:11.182 07:26:44 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:26:11.182 07:26:44 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:26:11.182 07:26:44 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:26:11.182 07:26:44 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:26:11.182 07:26:44 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:11.182 07:26:44 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:26:11.182 07:26:44 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:26:11.182 07:26:44 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:26:11.182 07:26:44 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:26:11.182 07:26:44 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 
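Each CONFIG_* value in this block is sourced from test/common/build_config.sh and has a mirror image in the generated include/spdk/config.h dumped a little further down: CONFIG_UBSAN=y becomes #define SPDK_CONFIG_UBSAN 1, disabled options become #undef. applications.sh, sourced next, gates its debug-only knobs on exactly that header with a single glob match; a minimal sketch of the same check, assuming rootdir points at the spdk checkout:

    # Sketch of the SPDK_CONFIG_DEBUG probe performed at common/applications.sh@23:
    # substring-match the generated header instead of re-parsing build flags.
    rootdir=/home/vagrant/spdk_repo/spdk
    config_h=$rootdir/include/spdk/config.h
    if [[ -e $config_h && $(< "$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
    fi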
00:26:11.182 07:26:44 -- common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:11.182 07:26:44 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:26:11.182 07:26:44 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:26:11.182 07:26:44 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:26:11.182 07:26:44 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:26:11.182 07:26:44 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:11.182 07:26:44 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:11.182 07:26:44 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:11.182 07:26:44 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:11.182 07:26:44 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:11.182 07:26:44 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:11.182 07:26:44 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:11.182 07:26:44 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:11.182 07:26:44 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:11.182 07:26:44 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:11.182 07:26:44 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:11.182 07:26:44 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:11.182 07:26:44 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:11.182 07:26:44 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:11.182 07:26:44 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:11.182 07:26:44 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:11.182 07:26:44 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:11.182 #define SPDK_CONFIG_H 00:26:11.182 #define SPDK_CONFIG_APPS 1 00:26:11.182 #define SPDK_CONFIG_ARCH native 00:26:11.182 #define SPDK_CONFIG_ASAN 1 00:26:11.182 #undef SPDK_CONFIG_AVAHI 00:26:11.182 #undef SPDK_CONFIG_CET 00:26:11.182 #define SPDK_CONFIG_COVERAGE 1 00:26:11.182 #define SPDK_CONFIG_CROSS_PREFIX 00:26:11.182 #undef SPDK_CONFIG_CRYPTO 00:26:11.182 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:11.182 #undef SPDK_CONFIG_CUSTOMOCF 00:26:11.182 #undef SPDK_CONFIG_DAOS 00:26:11.182 #define SPDK_CONFIG_DAOS_DIR 00:26:11.182 #define SPDK_CONFIG_DEBUG 1 00:26:11.182 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:11.182 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:11.182 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:11.182 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:11.182 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:11.182 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:11.182 #define SPDK_CONFIG_EXAMPLES 1 00:26:11.182 #undef SPDK_CONFIG_FC 00:26:11.182 #define SPDK_CONFIG_FC_PATH 00:26:11.182 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:11.182 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:11.182 #undef SPDK_CONFIG_FUSE 00:26:11.182 #undef SPDK_CONFIG_FUZZER 00:26:11.182 #define SPDK_CONFIG_FUZZER_LIB 00:26:11.182 #undef SPDK_CONFIG_GOLANG 00:26:11.182 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:11.182 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:11.182 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:11.182 
#undef SPDK_CONFIG_HAVE_LIBBSD 00:26:11.182 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:11.182 #define SPDK_CONFIG_IDXD 1 00:26:11.182 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:11.182 #undef SPDK_CONFIG_IPSEC_MB 00:26:11.182 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:11.182 #define SPDK_CONFIG_ISAL 1 00:26:11.182 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:11.182 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:11.182 #define SPDK_CONFIG_LIBDIR 00:26:11.182 #undef SPDK_CONFIG_LTO 00:26:11.182 #define SPDK_CONFIG_MAX_LCORES 00:26:11.182 #define SPDK_CONFIG_NVME_CUSE 1 00:26:11.182 #undef SPDK_CONFIG_OCF 00:26:11.182 #define SPDK_CONFIG_OCF_PATH 00:26:11.182 #define SPDK_CONFIG_OPENSSL_PATH 00:26:11.182 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:11.182 #undef SPDK_CONFIG_PGO_USE 00:26:11.182 #define SPDK_CONFIG_PREFIX /usr/local 00:26:11.182 #define SPDK_CONFIG_RAID5F 1 00:26:11.182 #undef SPDK_CONFIG_RBD 00:26:11.182 #define SPDK_CONFIG_RDMA 1 00:26:11.182 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:11.182 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:11.182 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:11.182 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:11.182 #undef SPDK_CONFIG_SHARED 00:26:11.182 #undef SPDK_CONFIG_SMA 00:26:11.182 #define SPDK_CONFIG_TESTS 1 00:26:11.182 #undef SPDK_CONFIG_TSAN 00:26:11.182 #undef SPDK_CONFIG_UBLK 00:26:11.182 #define SPDK_CONFIG_UBSAN 1 00:26:11.182 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:11.182 #undef SPDK_CONFIG_URING 00:26:11.182 #define SPDK_CONFIG_URING_PATH 00:26:11.182 #undef SPDK_CONFIG_URING_ZNS 00:26:11.182 #undef SPDK_CONFIG_USDT 00:26:11.182 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:11.182 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:11.182 #undef SPDK_CONFIG_VFIO_USER 00:26:11.182 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:11.182 #define SPDK_CONFIG_VHOST 1 00:26:11.182 #define SPDK_CONFIG_VIRTIO 1 00:26:11.182 #undef SPDK_CONFIG_VTUNE 00:26:11.182 #define SPDK_CONFIG_VTUNE_DIR 00:26:11.182 #define SPDK_CONFIG_WERROR 1 00:26:11.182 #define SPDK_CONFIG_WPDK_DIR 00:26:11.182 #undef SPDK_CONFIG_XNVME 00:26:11.182 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:11.182 07:26:44 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:11.182 07:26:44 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:11.182 07:26:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.182 07:26:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.182 07:26:44 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:11.182 07:26:44 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:11.182 07:26:44 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:11.182 07:26:44 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:11.183 07:26:44 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:11.183 07:26:44 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:11.183 07:26:44 -- pm/common@16 -- # TEST_TAG=N/A 00:26:11.183 07:26:44 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:11.183 07:26:44 -- common/autotest_common.sh@52 -- # : 1 00:26:11.183 07:26:44 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:11.183 07:26:44 -- common/autotest_common.sh@56 -- # : 0 00:26:11.183 07:26:44 -- 
common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:11.183 07:26:44 -- common/autotest_common.sh@58 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:11.183 07:26:44 -- common/autotest_common.sh@60 -- # : 1 00:26:11.183 07:26:44 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:11.183 07:26:44 -- common/autotest_common.sh@62 -- # : 1 00:26:11.183 07:26:44 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:11.183 07:26:44 -- common/autotest_common.sh@64 -- # : 00:26:11.183 07:26:44 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:11.183 07:26:44 -- common/autotest_common.sh@66 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:11.183 07:26:44 -- common/autotest_common.sh@68 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:11.183 07:26:44 -- common/autotest_common.sh@70 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:11.183 07:26:44 -- common/autotest_common.sh@72 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:11.183 07:26:44 -- common/autotest_common.sh@74 -- # : 1 00:26:11.183 07:26:44 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:11.183 07:26:44 -- common/autotest_common.sh@76 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:11.183 07:26:44 -- common/autotest_common.sh@78 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:11.183 07:26:44 -- common/autotest_common.sh@80 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:11.183 07:26:44 -- common/autotest_common.sh@82 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:11.183 07:26:44 -- common/autotest_common.sh@84 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:11.183 07:26:44 -- common/autotest_common.sh@86 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:11.183 07:26:44 -- common/autotest_common.sh@88 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:11.183 07:26:44 -- common/autotest_common.sh@90 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:11.183 07:26:44 -- common/autotest_common.sh@92 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:11.183 07:26:44 -- common/autotest_common.sh@94 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:11.183 07:26:44 -- common/autotest_common.sh@96 -- # : rdma 00:26:11.183 07:26:44 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:11.183 07:26:44 -- common/autotest_common.sh@98 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:11.183 07:26:44 -- common/autotest_common.sh@100 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:11.183 07:26:44 -- common/autotest_common.sh@102 -- # : 1 00:26:11.183 07:26:44 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:11.183 07:26:44 -- common/autotest_common.sh@104 -- # : 0 
00:26:11.183 07:26:44 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:11.183 07:26:44 -- common/autotest_common.sh@106 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:11.183 07:26:44 -- common/autotest_common.sh@108 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:11.183 07:26:44 -- common/autotest_common.sh@110 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:11.183 07:26:44 -- common/autotest_common.sh@112 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:11.183 07:26:44 -- common/autotest_common.sh@114 -- # : 1 00:26:11.183 07:26:44 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:11.183 07:26:44 -- common/autotest_common.sh@116 -- # : 1 00:26:11.183 07:26:44 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:11.183 07:26:44 -- common/autotest_common.sh@118 -- # : 00:26:11.183 07:26:44 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:11.183 07:26:44 -- common/autotest_common.sh@120 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:11.183 07:26:44 -- common/autotest_common.sh@122 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:11.183 07:26:44 -- common/autotest_common.sh@124 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:11.183 07:26:44 -- common/autotest_common.sh@126 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:11.183 07:26:44 -- common/autotest_common.sh@128 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:11.183 07:26:44 -- common/autotest_common.sh@130 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:11.183 07:26:44 -- common/autotest_common.sh@132 -- # : 00:26:11.183 07:26:44 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:11.183 07:26:44 -- common/autotest_common.sh@134 -- # : true 00:26:11.183 07:26:44 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:11.183 07:26:44 -- common/autotest_common.sh@136 -- # : 1 00:26:11.183 07:26:44 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:11.183 07:26:44 -- common/autotest_common.sh@138 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:11.183 07:26:44 -- common/autotest_common.sh@140 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:11.183 07:26:44 -- common/autotest_common.sh@142 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:11.183 07:26:44 -- common/autotest_common.sh@144 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:11.183 07:26:44 -- common/autotest_common.sh@146 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:11.183 07:26:44 -- common/autotest_common.sh@148 -- # : 00:26:11.183 07:26:44 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:11.183 07:26:44 -- common/autotest_common.sh@150 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:11.183 07:26:44 -- common/autotest_common.sh@152 -- # 
: 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:11.183 07:26:44 -- common/autotest_common.sh@154 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:11.183 07:26:44 -- common/autotest_common.sh@156 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:11.183 07:26:44 -- common/autotest_common.sh@158 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:11.183 07:26:44 -- common/autotest_common.sh@161 -- # : 00:26:11.183 07:26:44 -- common/autotest_common.sh@162 -- # export SPDK_TEST_FUZZER_TARGET 00:26:11.183 07:26:44 -- common/autotest_common.sh@163 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@164 -- # export SPDK_TEST_NVMF_MDNS 00:26:11.183 07:26:44 -- common/autotest_common.sh@165 -- # : 0 00:26:11.183 07:26:44 -- common/autotest_common.sh@166 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:11.183 07:26:44 -- common/autotest_common.sh@169 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:11.183 07:26:44 -- common/autotest_common.sh@169 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:11.183 07:26:44 -- common/autotest_common.sh@170 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:11.183 07:26:44 -- common/autotest_common.sh@170 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:11.183 07:26:44 -- common/autotest_common.sh@171 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:11.183 07:26:44 -- common/autotest_common.sh@171 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:11.183 07:26:44 -- common/autotest_common.sh@172 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:11.183 07:26:44 -- common/autotest_common.sh@172 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:11.183 07:26:44 -- common/autotest_common.sh@175 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:11.183 07:26:44 -- common/autotest_common.sh@175 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:11.183 07:26:44 -- common/autotest_common.sh@179 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:11.183 07:26:44 -- common/autotest_common.sh@179 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:11.183 07:26:44 -- 
common/autotest_common.sh@183 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:11.183 07:26:44 -- common/autotest_common.sh@183 -- # PYTHONDONTWRITEBYTECODE=1 00:26:11.183 07:26:44 -- common/autotest_common.sh@187 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:11.183 07:26:44 -- common/autotest_common.sh@187 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:11.183 07:26:44 -- common/autotest_common.sh@188 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:11.183 07:26:44 -- common/autotest_common.sh@188 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:11.183 07:26:44 -- common/autotest_common.sh@192 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:11.183 07:26:44 -- common/autotest_common.sh@193 -- # rm -rf /var/tmp/asan_suppression_file 00:26:11.184 07:26:44 -- common/autotest_common.sh@194 -- # cat 00:26:11.184 07:26:44 -- common/autotest_common.sh@220 -- # echo leak:libfuse3.so 00:26:11.184 07:26:44 -- common/autotest_common.sh@222 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:11.184 07:26:44 -- common/autotest_common.sh@222 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:11.184 07:26:44 -- common/autotest_common.sh@224 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:11.184 07:26:44 -- common/autotest_common.sh@224 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:11.184 07:26:44 -- common/autotest_common.sh@226 -- # '[' -z /var/spdk/dependencies ']' 00:26:11.184 07:26:44 -- common/autotest_common.sh@229 -- # export DEPENDENCY_DIR 00:26:11.184 07:26:44 -- common/autotest_common.sh@233 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:11.184 07:26:44 -- common/autotest_common.sh@233 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:11.184 07:26:44 -- common/autotest_common.sh@234 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:11.184 07:26:44 -- common/autotest_common.sh@234 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:11.184 07:26:44 -- common/autotest_common.sh@237 -- # export QEMU_BIN= 00:26:11.184 07:26:44 -- common/autotest_common.sh@237 -- # QEMU_BIN= 00:26:11.184 07:26:44 -- common/autotest_common.sh@238 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:11.184 07:26:44 -- common/autotest_common.sh@238 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:11.184 07:26:44 -- common/autotest_common.sh@240 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:11.184 07:26:44 -- common/autotest_common.sh@240 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:11.184 07:26:44 -- common/autotest_common.sh@243 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:11.184 07:26:44 -- common/autotest_common.sh@243 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:11.184 07:26:44 -- common/autotest_common.sh@246 -- # '[' 0 -eq 0 ']' 00:26:11.184 07:26:44 -- common/autotest_common.sh@247 -- # export valgrind= 00:26:11.184 07:26:44 -- common/autotest_common.sh@247 -- # valgrind= 00:26:11.184 07:26:44 -- common/autotest_common.sh@253 -- # uname -s 00:26:11.184 07:26:44 -- common/autotest_common.sh@253 -- # '[' Linux = Linux ']' 00:26:11.184 07:26:44 -- common/autotest_common.sh@254 -- # HUGEMEM=4096 00:26:11.184 07:26:44 
-- common/autotest_common.sh@255 -- # export CLEAR_HUGE=yes 00:26:11.184 07:26:44 -- common/autotest_common.sh@255 -- # CLEAR_HUGE=yes 00:26:11.184 07:26:44 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 00:26:11.184 07:26:44 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 00:26:11.184 07:26:44 -- common/autotest_common.sh@263 -- # MAKE=make 00:26:11.184 07:26:44 -- common/autotest_common.sh@264 -- # MAKEFLAGS=-j10 00:26:11.184 07:26:44 -- common/autotest_common.sh@280 -- # export HUGEMEM=4096 00:26:11.184 07:26:44 -- common/autotest_common.sh@280 -- # HUGEMEM=4096 00:26:11.184 07:26:44 -- common/autotest_common.sh@282 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:11.184 07:26:44 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:26:11.184 07:26:44 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:26:11.184 07:26:44 -- common/autotest_common.sh@307 -- # [[ -z 138295 ]] 00:26:11.184 07:26:44 -- common/autotest_common.sh@307 -- # kill -0 138295 00:26:11.184 07:26:44 -- common/autotest_common.sh@1663 -- # set_test_storage 2147483648 00:26:11.184 07:26:44 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:26:11.184 07:26:44 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:26:11.184 07:26:44 -- common/autotest_common.sh@320 -- # local mount target_dir 00:26:11.184 07:26:44 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:26:11.184 07:26:44 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:26:11.184 07:26:44 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:26:11.184 07:26:44 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:26:11.184 07:26:44 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.o00SLa 00:26:11.184 07:26:44 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:11.184 07:26:44 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:26:11.184 07:26:44 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:26:11.184 07:26:44 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.o00SLa/tests/interrupt /tmp/spdk.o00SLa 00:26:11.184 07:26:44 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@316 -- # df -T 00:26:11.184 07:26:44 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=udev 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=6230982656 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6230982656 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=1250992128 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1255759872 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=4767744 00:26:11.184 07:26:44 
-- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=11009417216 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=9590599680 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=6276194304 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6278787072 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=2592768 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=6278787072 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6278787072 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop0 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=66453504 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=66453504 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop1 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=96337920 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=96337920 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop2 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=52297728 00:26:11.184 07:26:44 
-- common/autotest_common.sh@352 -- # uses["$mount"]=52297728 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=98705408 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109422592 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=10718208 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=1255755776 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1255755776 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=97961512960 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=1741266944 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop3 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=42467328 00:26:11.184 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=42467328 00:26:11.184 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop4 00:26:11.184 07:26:44 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:26:11.184 07:26:44 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:11.185 07:26:44 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:26:11.185 07:26:44 -- common/autotest_common.sh@352 -- # uses["$mount"]=67108864 00:26:11.185 07:26:44 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:11.185 07:26:44 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:26:11.185 * Looking for test storage... 
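The df -T walk just traced is how set_test_storage hunts for a filesystem with enough free space to host the interrupt tests. A minimal sketch of that selection logic, assuming GNU df (the harness's own invocation differs in detail); pick_test_storage is an illustrative name, not the harness's:

    pick_test_storage() {
      local requested=$1   # bytes, e.g. 2147483648 as requested_size above
      local src fstype avail mnt
      while read -r src fstype avail mnt; do
        # -B1 below makes df print byte counts, matching the avails[] values in the trace
        if (( avail >= requested )); then
          echo "test storage candidate: $mnt ($src, $fstype)"
          return 0
        fi
      done < <(df -T -B1 | awk 'NR > 1 {print $1, $2, $5, $7}')
      return 1
    }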
00:26:11.185 07:26:44 -- common/autotest_common.sh@357 -- # local target_space new_size 00:26:11.185 07:26:44 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:26:11.185 07:26:44 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:11.185 07:26:44 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:11.185 07:26:44 -- common/autotest_common.sh@361 -- # mount=/ 00:26:11.185 07:26:44 -- common/autotest_common.sh@363 -- # target_space=11009417216 00:26:11.185 07:26:44 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:26:11.185 07:26:44 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:26:11.185 07:26:44 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:26:11.185 07:26:44 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:26:11.185 07:26:44 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:26:11.185 07:26:44 -- common/autotest_common.sh@370 -- # new_size=11805192192 00:26:11.185 07:26:44 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:11.185 07:26:44 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:11.185 07:26:44 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:11.185 07:26:44 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:11.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:11.185 07:26:44 -- common/autotest_common.sh@378 -- # return 0 00:26:11.185 07:26:44 -- common/autotest_common.sh@1665 -- # set -o errtrace 00:26:11.185 07:26:44 -- common/autotest_common.sh@1666 -- # shopt -s extdebug 00:26:11.185 07:26:44 -- common/autotest_common.sh@1667 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:11.185 07:26:44 -- common/autotest_common.sh@1669 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:11.185 07:26:44 -- common/autotest_common.sh@1670 -- # true 00:26:11.185 07:26:44 -- common/autotest_common.sh@1672 -- # xtrace_fd 00:26:11.185 07:26:44 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:11.185 07:26:44 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:11.185 07:26:44 -- common/autotest_common.sh@27 -- # exec 00:26:11.185 07:26:44 -- common/autotest_common.sh@29 -- # exec 00:26:11.185 07:26:44 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:11.185 07:26:44 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:11.185 07:26:44 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:11.185 07:26:44 -- common/autotest_common.sh@18 -- # set -x 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:11.185 07:26:44 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:11.185 07:26:44 -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:11.185 07:26:44 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=138345 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:11.185 07:26:44 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 138345 /var/tmp/spdk.sock 00:26:11.185 07:26:44 -- common/autotest_common.sh@817 -- # '[' -z 138345 ']' 00:26:11.185 07:26:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:11.185 07:26:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:11.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:11.185 07:26:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:11.185 07:26:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:11.185 07:26:44 -- common/autotest_common.sh@10 -- # set +x 00:26:11.185 [2024-02-13 07:26:44.451894] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
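At this point start_intr_tgt has launched build/examples/interrupt_tgt on cpumask 0x07 and handed the pid to waitforlisten. A hedged sketch of the wait that implies, assuming the target creates the Unix socket at /var/tmp/spdk.sock once its RPC server is up; the real waitforlisten in autotest_common.sh retries more carefully than this:

    wait_for_rpc_sock() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
      while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
        [[ -S $sock ]] && return 0               # socket exists: RPC server is up
        sleep 0.1
      done
      return 1                                   # timed out waiting for the socket
    }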
00:26:11.185 [2024-02-13 07:26:44.452060] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138345 ] 00:26:11.185 [2024-02-13 07:26:44.616731] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:11.185 [2024-02-13 07:26:44.790096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:11.185 [2024-02-13 07:26:44.790200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:11.185 [2024-02-13 07:26:44.790207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.444 [2024-02-13 07:26:45.054255] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:12.034 07:26:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:12.034 07:26:45 -- common/autotest_common.sh@850 -- # return 0 00:26:12.034 07:26:45 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:26:12.034 07:26:45 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:12.034 Malloc0 00:26:12.034 Malloc1 00:26:12.034 Malloc2 00:26:12.034 07:26:45 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:26:12.034 07:26:45 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:12.034 07:26:45 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:12.034 07:26:45 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:12.034 5000+0 records in 00:26:12.034 5000+0 records out 00:26:12.034 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0178373 s, 574 MB/s 00:26:12.034 07:26:45 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:12.602 AIO0 00:26:12.602 07:26:46 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 138345 00:26:12.602 07:26:46 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 138345 without_thd 00:26:12.602 07:26:46 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=138345 00:26:12.602 07:26:46 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:26:12.602 07:26:46 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:12.602 07:26:46 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:12.602 07:26:46 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:12.602 07:26:46 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 
0x4 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:12.602 07:26:46 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:12.861 07:26:46 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:12.861 spdk_thread ids are 1 on reactor0. 00:26:12.861 07:26:46 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:12.861 07:26:46 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:12.861 07:26:46 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 138345 0 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138345 0 idle 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@33 -- # local pid=138345 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138345 -w 256 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138345 root 20 0 20.1t 143228 28888 S 0.0 1.2 0:00.65 reactor_0' 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@48 -- # echo 138345 root 20 0 20.1t 143228 28888 S 0.0 1.2 0:00.65 reactor_0 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:12.861 07:26:46 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:12.861 07:26:46 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 138345 1 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138345 1 idle 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@33 -- # local pid=138345 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:12.861 
07:26:46 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138345 -w 256 00:26:12.861 07:26:46 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138348 root 20 0 20.1t 143228 28888 S 0.0 1.2 0:00.00 reactor_1' 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@48 -- # echo 138348 root 20 0 20.1t 143228 28888 S 0.0 1.2 0:00.00 reactor_1 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:13.120 07:26:46 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:13.120 07:26:46 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 138345 2 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138345 2 idle 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@33 -- # local pid=138345 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138345 -w 256 00:26:13.120 07:26:46 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138349 root 20 0 20.1t 143228 28888 S 0.0 1.2 0:00.00 reactor_2' 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@48 -- # echo 138349 root 20 0 20.1t 143228 28888 S 0.0 1.2 0:00.00 reactor_2 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:13.379 07:26:46 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:13.379 07:26:46 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:26:13.379 07:26:46 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 
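Each idle/busy verdict above comes from a single batch snapshot of top, filtered to the reactor thread of interest; column 9 in procps top's default layout is %CPU. A condensed sketch of that probe (reactor_cpu_rate is an illustrative name; the harness inlines this pipeline in reactor_is_busy_or_idle):

    reactor_cpu_rate() {
      local pid=$1 idx=$2
      top -bHn 1 -p "$pid" -w 256 \
        | grep "reactor_${idx}" \
        | sed -e 's/^\s*//g' \
        | awk '{print $9}' \
        | cut -d. -f1          # 87.5 -> 87, 0.0 -> 0, as cpu_rate above
    }
    # per the comparisons traced above: a busy reactor must not fall below 70,
    # an idle one must not rise above 30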
00:26:13.379 07:26:46 -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:26:13.379 [2024-02-13 07:26:47.050833] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:13.379 07:26:47 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:13.638 [2024-02-13 07:26:47.290609] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:13.638 [2024-02-13 07:26:47.291153] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:13.638 07:26:47 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:13.897 [2024-02-13 07:26:47.558386] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 00:26:13.897 [2024-02-13 07:26:47.558736] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:13.897 07:26:47 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:13.897 07:26:47 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 138345 0 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 138345 0 busy 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@33 -- # local pid=138345 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138345 -w 256 00:26:13.897 07:26:47 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138345 root 20 0 20.1t 143340 28888 R 87.5 1.2 0:01.09 reactor_0' 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@48 -- # echo 138345 root 20 0 20.1t 143340 28888 R 87.5 1.2 0:01.09 reactor_0 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=87.5 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=87 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@51 -- # [[ 87 -lt 70 ]] 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:14.156 07:26:47 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:14.156 07:26:47 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 138345 2 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 138345 2 busy 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@33 -- # local pid=138345 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:14.156 
07:26:47 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138345 -w 256 00:26:14.156 07:26:47 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138349 root 20 0 20.1t 143340 28888 R 93.3 1.2 0:00.33 reactor_2' 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@48 -- # echo 138349 root 20 0 20.1t 143340 28888 R 93.3 1.2 0:00.33 reactor_2 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=93.3 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=93 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@51 -- # [[ 93 -lt 70 ]] 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:14.415 07:26:47 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:14.415 07:26:47 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:14.415 [2024-02-13 07:26:48.090385] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:14.415 [2024-02-13 07:26:48.090564] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:14.415 07:26:48 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:26:14.415 07:26:48 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 138345 2 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138345 2 idle 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@33 -- # local pid=138345 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:14.415 07:26:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138345 -w 256 00:26:14.674 07:26:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138349 root 20 0 20.1t 143408 28888 S 0.0 1.2 0:00.53 reactor_2' 00:26:14.674 07:26:48 -- interrupt/interrupt_common.sh@48 -- # echo 138349 root 20 0 20.1t 143408 28888 S 0.0 1.2 0:00.53 reactor_2 00:26:14.674 07:26:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:14.674 07:26:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:14.674 07:26:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:14.674 07:26:48 -- 
interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:14.674 07:26:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:14.674 07:26:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:14.674 07:26:48 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:14.674 07:26:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:14.674 07:26:48 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:14.933 [2024-02-13 07:26:48.450356] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:14.933 [2024-02-13 07:26:48.450603] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:14.933 07:26:48 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:26:14.933 07:26:48 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:26:14.933 07:26:48 -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:26:15.192 [2024-02-13 07:26:48.646831] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:15.192 07:26:48 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 138345 0 00:26:15.192 07:26:48 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138345 0 idle 00:26:15.192 07:26:48 -- interrupt/interrupt_common.sh@33 -- # local pid=138345 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138345 -w 256 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138345 root 20 0 20.1t 143500 28888 S 0.0 1.2 0:01.82 reactor_0' 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@48 -- # echo 138345 root 20 0 20.1t 143500 28888 S 0.0 1.2 0:01.82 reactor_0 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:15.193 07:26:48 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:15.193 07:26:48 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:15.193 07:26:48 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:26:15.193 07:26:48 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:26:15.193 07:26:48 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 138345 
00:26:15.193 07:26:48 -- common/autotest_common.sh@924 -- # '[' -z 138345 ']' 00:26:15.193 07:26:48 -- common/autotest_common.sh@928 -- # kill -0 138345 00:26:15.193 07:26:48 -- common/autotest_common.sh@929 -- # uname 00:26:15.193 07:26:48 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:15.193 07:26:48 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 138345 00:26:15.193 07:26:48 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:15.193 07:26:48 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:15.193 killing process with pid 138345 00:26:15.193 07:26:48 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 138345' 00:26:15.193 07:26:48 -- common/autotest_common.sh@943 -- # kill 138345 00:26:15.193 07:26:48 -- common/autotest_common.sh@948 -- # wait 138345 00:26:16.571 07:26:50 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:26:16.571 07:26:50 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:16.571 07:26:50 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:26:16.571 07:26:50 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.571 07:26:50 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:16.571 07:26:50 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=138499 00:26:16.571 07:26:50 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:16.571 07:26:50 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:16.571 07:26:50 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 138499 /var/tmp/spdk.sock 00:26:16.571 07:26:50 -- common/autotest_common.sh@817 -- # '[' -z 138499 ']' 00:26:16.571 07:26:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:16.571 07:26:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:16.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:16.571 07:26:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:16.571 07:26:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:16.571 07:26:50 -- common/autotest_common.sh@10 -- # set +x 00:26:16.571 [2024-02-13 07:26:50.141935] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:16.571 [2024-02-13 07:26:50.142116] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138499 ] 00:26:16.830 [2024-02-13 07:26:50.312154] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:16.830 [2024-02-13 07:26:50.517466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:16.830 [2024-02-13 07:26:50.517598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.830 [2024-02-13 07:26:50.517606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.397 [2024-02-13 07:26:50.808005] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
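killprocess, traced at the top of this block, guards the kill with a liveness probe and a process-name check so the harness never signals a sudo wrapper by mistake. A minimal sketch under those assumptions:

    killproc_sketch() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 1    # already gone
      local name
      name=$(ps --no-headers -o comm= -p "$pid")
      [[ $name == sudo ]] && return 1           # refuse to TERM sudo itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true           # reaping only works for our own children
    }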
00:26:17.397 07:26:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:17.397 07:26:51 -- common/autotest_common.sh@850 -- # return 0 00:26:17.397 07:26:51 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:26:17.397 07:26:51 -- interrupt/interrupt_common.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:17.657 Malloc0 00:26:17.657 Malloc1 00:26:17.657 Malloc2 00:26:17.916 07:26:51 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:26:17.916 07:26:51 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:17.916 07:26:51 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:17.916 07:26:51 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:17.916 5000+0 records in 00:26:17.916 5000+0 records out 00:26:17.916 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0130575 s, 784 MB/s 00:26:17.916 07:26:51 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:18.175 AIO0 00:26:18.175 07:26:51 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 138499 00:26:18.175 07:26:51 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 138499 00:26:18.175 07:26:51 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=138499 00:26:18.175 07:26:51 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:26:18.175 07:26:51 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:26:18.175 07:26:51 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:26:18.175 07:26:51 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:26:18.175 07:26:51 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:18.175 07:26:51 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:26:18.175 07:26:51 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:18.175 07:26:51 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:18.175 07:26:51 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:18.434 07:26:51 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:26:18.434 07:26:51 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:26:18.434 07:26:51 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:26:18.434 07:26:51 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:26:18.434 07:26:51 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:26:18.434 07:26:51 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:26:18.434 07:26:51 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:18.434 07:26:51 -- interrupt/interrupt_common.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats 00:26:18.434 07:26:51 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:26:18.434 07:26:52 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:26:18.434 spdk_thread ids are 1 on reactor0. 
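The thread-id lookups just traced pair rpc.py thread_get_stats with a jq select on the cpumask: reactor 0 (mask 0x1) owns thread 1, while reactor 2 (mask 0x4) owns none yet, hence the empty echo. The same query factored into a function; the function name is illustrative, the jq filter is verbatim from the trace:

    reactor_thread_ids() {
      local mask=$1   # e.g. 0x1 or 0x4, as passed to reactor_get_thread_ids
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats \
        | jq --arg reactor_cpumask "${mask#0x}" \
             '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
    }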
00:26:18.434 07:26:52 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:26:18.434 07:26:52 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:18.434 07:26:52 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 138499 0 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138499 0 idle 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@33 -- # local pid=138499 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138499 -w 256 00:26:18.434 07:26:52 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138499 root 20 0 20.1t 145592 28604 S 0.0 1.2 0:00.72 reactor_0' 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@48 -- # echo 138499 root 20 0 20.1t 145592 28604 S 0.0 1.2 0:00.72 reactor_0 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:18.693 07:26:52 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:18.693 07:26:52 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 138499 1 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138499 1 idle 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@33 -- # local pid=138499 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138499 -w 256 00:26:18.693 07:26:52 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138506 root 20 0 20.1t 145592 28604 S 0.0 1.2 0:00.00 reactor_1' 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@48 -- # echo 138506 root 20 0 20.1t 145592 28604 S 0.0 1.2 0:00.00 reactor_1 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:18.951 07:26:52 -- 
interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:18.951 07:26:52 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:26:18.951 07:26:52 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 138499 2 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138499 2 idle 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@33 -- # local pid=138499 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138499 -w 256 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138507 root 20 0 20.1t 145592 28604 S 0.0 1.2 0:00.00 reactor_2' 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@48 -- # echo 138507 root 20 0 20.1t 145592 28604 S 0.0 1.2 0:00.00 reactor_2 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:18.951 07:26:52 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:18.951 07:26:52 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:26:18.951 07:26:52 -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:26:19.209 [2024-02-13 07:26:52.844355] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:26:19.209 [2024-02-13 07:26:52.844643] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:26:19.209 [2024-02-13 07:26:52.844951] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:19.209 07:26:52 -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:26:19.467 [2024-02-13 07:26:53.100161] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
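The RPC calls above are the whole mode toggle: reactor_set_interrupt_mode is provided by the interrupt_plugin that PYTHONPATH points at, and -d flips a reactor to poll mode while the bare form restores interrupt mode. Both shapes, exactly as traced in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d   # reactor 0 -> poll mode
    "$rpc" --plugin interrupt_plugin reactor_set_interrupt_mode 0      # reactor 0 -> back to interrupt mode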
00:26:19.467 [2024-02-13 07:26:53.100585] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:19.467 07:26:53 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:19.467 07:26:53 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 138499 0 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 138499 0 busy 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@33 -- # local pid=138499 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138499 -w 256 00:26:19.467 07:26:53 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138499 root 20 0 20.1t 145672 28604 R 99.9 1.2 0:01.16 reactor_0' 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@48 -- # echo 138499 root 20 0 20.1t 145672 28604 R 99.9 1.2 0:01.16 reactor_0 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:19.725 07:26:53 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:26:19.725 07:26:53 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 138499 2 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 138499 2 busy 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@33 -- # local pid=138499 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138499 -w 256 00:26:19.725 07:26:53 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138507 root 20 0 20.1t 145672 28604 R 99.9 1.2 0:00.34 reactor_2' 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@48 -- # echo 138507 root 20 0 20.1t 145672 28604 R 99.9 1.2 0:00.34 reactor_2 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:26:19.984 
07:26:53 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:19.984 07:26:53 -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:26:19.984 [2024-02-13 07:26:53.632511] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:26:19.984 [2024-02-13 07:26:53.632810] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:19.984 07:26:53 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:26:19.984 07:26:53 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 138499 2 00:26:19.984 07:26:53 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138499 2 idle 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@33 -- # local pid=138499 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138499 -w 256 00:26:19.985 07:26:53 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138507 root 20 0 20.1t 145736 28604 S 0.0 1.2 0:00.53 reactor_2' 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@48 -- # echo 138507 root 20 0 20.1t 145736 28604 S 0.0 1.2 0:00.53 reactor_2 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:26:20.244 07:26:53 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:20.244 07:26:53 -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:26:20.503 [2024-02-13 07:26:54.028529] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:26:20.503 [2024-02-13 07:26:54.028984] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 
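[editor's note] Every reactor_is_busy_or_idle check bracketing these mode switches follows the one pattern visible in the xtrace: take a single batch sample of the target's threads with top, grep out the reactor's row, read %CPU from column 9, and compare it against a threshold (at least 70% for busy, at most 30% for idle, per the [[ ]] tests above). A hedged stand-alone sketch of that probe; the function name check_reactor is illustrative, not the script's own:

    check_reactor() {
        local pid=$1 idx=$2 state=$3
        local row rate
        # one non-interactive top sample: -b batch, -H show threads, -n 1 single
        # iteration, -w 256 wide output -- same invocation as the trace above
        row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        # column 9 of the top row is %CPU; strip leading whitespace first
        rate=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
        rate=${rate%.*}        # 99.9 -> 99, 0.0 -> 0, matching cpu_rate above
        if [[ $state == busy ]]; then
            (( rate >= 70 ))   # a busy reactor should be pinned near 100% CPU
        else
            (( rate <= 30 ))   # an idle reactor should be near 0% CPU
        fi
    }

Called as, e.g., check_reactor 138499 2 idle, it succeeds with status 0 just like the "return 0" lines in the trace; note that top is pointed at the main process (138499) while grep selects the per-reactor thread row (138507 for reactor_2).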
00:26:20.503 [2024-02-13 07:26:54.029030] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:26:20.503 07:26:54 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:26:20.503 07:26:54 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 138499 0 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 138499 0 idle 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@33 -- # local pid=138499 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@41 -- # hash top 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 138499 -w 256 00:26:20.503 07:26:54 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@47 -- # top_reactor=' 138499 root 20 0 20.1t 145776 28604 S 6.2 1.2 0:01.92 reactor_0' 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@48 -- # echo 138499 root 20 0 20.1t 145776 28604 S 6.2 1.2 0:01.92 reactor_0 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.2 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:26:20.762 07:26:54 -- interrupt/interrupt_common.sh@56 -- # return 0 00:26:20.762 07:26:54 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:26:20.762 07:26:54 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:26:20.762 07:26:54 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:20.762 07:26:54 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 138499 00:26:20.762 07:26:54 -- common/autotest_common.sh@924 -- # '[' -z 138499 ']' 00:26:20.762 07:26:54 -- common/autotest_common.sh@928 -- # kill -0 138499 00:26:20.762 07:26:54 -- common/autotest_common.sh@929 -- # uname 00:26:20.762 07:26:54 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:20.762 07:26:54 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 138499 00:26:20.762 killing process with pid 138499 00:26:20.762 07:26:54 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:20.762 07:26:54 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:20.762 07:26:54 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 138499' 00:26:20.762 07:26:54 -- common/autotest_common.sh@943 -- # kill 138499 00:26:20.762 07:26:54 -- common/autotest_common.sh@948 -- # wait 138499 00:26:22.142 07:26:55 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:26:22.142 07:26:55 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:22.142 00:26:22.142 real 0m11.246s 00:26:22.142 
user 0m11.507s 00:26:22.142 sys 0m1.416s 00:26:22.142 07:26:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:22.142 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.142 ************************************ 00:26:22.142 END TEST reactor_set_interrupt 00:26:22.142 ************************************ 00:26:22.142 07:26:55 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:22.142 07:26:55 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:26:22.142 07:26:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:22.142 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.142 ************************************ 00:26:22.142 START TEST reap_unregistered_poller 00:26:22.142 ************************************ 00:26:22.142 07:26:55 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:22.142 * Looking for test storage... 00:26:22.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:22.142 07:26:55 -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh 00:26:22.142 07:26:55 -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh 00:26:22.142 07:26:55 -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:22.142 07:26:55 -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:22.142 07:26:55 -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../.. 00:26:22.142 07:26:55 -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:22.142 07:26:55 -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:26:22.142 07:26:55 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:26:22.142 07:26:55 -- common/autotest_common.sh@34 -- # set -e 00:26:22.142 07:26:55 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:26:22.142 07:26:55 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:26:22.142 07:26:55 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:26:22.142 07:26:55 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:26:22.142 07:26:55 -- common/build_config.sh@1 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:26:22.142 07:26:55 -- common/build_config.sh@2 -- # CONFIG_FIO_PLUGIN=y 00:26:22.142 07:26:55 -- common/build_config.sh@3 -- # CONFIG_NVME_CUSE=y 00:26:22.142 07:26:55 -- common/build_config.sh@4 -- # CONFIG_RAID5F=y 00:26:22.142 07:26:55 -- common/build_config.sh@5 -- # CONFIG_LTO=n 00:26:22.142 07:26:55 -- common/build_config.sh@6 -- # CONFIG_SMA=n 00:26:22.142 07:26:55 -- common/build_config.sh@7 -- # CONFIG_ISAL=y 00:26:22.142 07:26:55 -- common/build_config.sh@8 -- # CONFIG_OPENSSL_PATH= 00:26:22.142 07:26:55 -- common/build_config.sh@9 -- # CONFIG_IDXD_KERNEL=n 00:26:22.142 07:26:55 -- common/build_config.sh@10 -- # CONFIG_URING_PATH= 00:26:22.142 07:26:55 -- common/build_config.sh@11 -- # CONFIG_DAOS=n 00:26:22.142 07:26:55 -- common/build_config.sh@12 -- # CONFIG_DPDK_LIB_DIR= 00:26:22.142 07:26:55 -- common/build_config.sh@13 -- # CONFIG_OCF=n 00:26:22.142 07:26:55 -- common/build_config.sh@14 -- # CONFIG_EXAMPLES=y 
00:26:22.142 07:26:55 -- common/build_config.sh@15 -- # CONFIG_RDMA_PROV=verbs 00:26:22.142 07:26:55 -- common/build_config.sh@16 -- # CONFIG_ISCSI_INITIATOR=y 00:26:22.142 07:26:55 -- common/build_config.sh@17 -- # CONFIG_VTUNE=n 00:26:22.142 07:26:55 -- common/build_config.sh@18 -- # CONFIG_DPDK_INC_DIR= 00:26:22.142 07:26:55 -- common/build_config.sh@19 -- # CONFIG_CET=n 00:26:22.142 07:26:55 -- common/build_config.sh@20 -- # CONFIG_TESTS=y 00:26:22.142 07:26:55 -- common/build_config.sh@21 -- # CONFIG_APPS=y 00:26:22.142 07:26:55 -- common/build_config.sh@22 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:26:22.142 07:26:55 -- common/build_config.sh@23 -- # CONFIG_DAOS_DIR= 00:26:22.142 07:26:55 -- common/build_config.sh@24 -- # CONFIG_CRYPTO_MLX5=n 00:26:22.142 07:26:55 -- common/build_config.sh@25 -- # CONFIG_XNVME=n 00:26:22.142 07:26:55 -- common/build_config.sh@26 -- # CONFIG_UNIT_TESTS=y 00:26:22.142 07:26:55 -- common/build_config.sh@27 -- # CONFIG_FUSE=n 00:26:22.142 07:26:55 -- common/build_config.sh@28 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:26:22.142 07:26:55 -- common/build_config.sh@29 -- # CONFIG_OCF_PATH= 00:26:22.142 07:26:55 -- common/build_config.sh@30 -- # CONFIG_WPDK_DIR= 00:26:22.142 07:26:55 -- common/build_config.sh@31 -- # CONFIG_VFIO_USER=n 00:26:22.142 07:26:55 -- common/build_config.sh@32 -- # CONFIG_MAX_LCORES= 00:26:22.142 07:26:55 -- common/build_config.sh@33 -- # CONFIG_ARCH=native 00:26:22.142 07:26:55 -- common/build_config.sh@34 -- # CONFIG_TSAN=n 00:26:22.142 07:26:55 -- common/build_config.sh@35 -- # CONFIG_VIRTIO=y 00:26:22.142 07:26:55 -- common/build_config.sh@36 -- # CONFIG_IPSEC_MB=n 00:26:22.142 07:26:55 -- common/build_config.sh@37 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:26:22.142 07:26:55 -- common/build_config.sh@38 -- # CONFIG_ASAN=y 00:26:22.142 07:26:55 -- common/build_config.sh@39 -- # CONFIG_SHARED=n 00:26:22.142 07:26:55 -- common/build_config.sh@40 -- # CONFIG_VTUNE_DIR= 00:26:22.142 07:26:55 -- common/build_config.sh@41 -- # CONFIG_RDMA_SET_TOS=y 00:26:22.142 07:26:55 -- common/build_config.sh@42 -- # CONFIG_VBDEV_COMPRESS=n 00:26:22.142 07:26:55 -- common/build_config.sh@43 -- # CONFIG_VFIO_USER_DIR= 00:26:22.142 07:26:55 -- common/build_config.sh@44 -- # CONFIG_FUZZER_LIB= 00:26:22.142 07:26:55 -- common/build_config.sh@45 -- # CONFIG_HAVE_EXECINFO_H=y 00:26:22.142 07:26:55 -- common/build_config.sh@46 -- # CONFIG_USDT=n 00:26:22.142 07:26:55 -- common/build_config.sh@47 -- # CONFIG_URING_ZNS=n 00:26:22.142 07:26:55 -- common/build_config.sh@48 -- # CONFIG_FC_PATH= 00:26:22.142 07:26:55 -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:26:22.142 07:26:55 -- common/build_config.sh@50 -- # CONFIG_CUSTOMOCF=n 00:26:22.142 07:26:55 -- common/build_config.sh@51 -- # CONFIG_DPDK_PKG_CONFIG=n 00:26:22.142 07:26:55 -- common/build_config.sh@52 -- # CONFIG_WERROR=y 00:26:22.142 07:26:55 -- common/build_config.sh@53 -- # CONFIG_DEBUG=y 00:26:22.142 07:26:55 -- common/build_config.sh@54 -- # CONFIG_RDMA=y 00:26:22.142 07:26:55 -- common/build_config.sh@55 -- # CONFIG_HAVE_ARC4RANDOM=n 00:26:22.142 07:26:55 -- common/build_config.sh@56 -- # CONFIG_FUZZER=n 00:26:22.142 07:26:55 -- common/build_config.sh@57 -- # CONFIG_FC=n 00:26:22.142 07:26:55 -- common/build_config.sh@58 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:26:22.142 07:26:55 -- common/build_config.sh@59 -- # CONFIG_HAVE_LIBARCHIVE=n 00:26:22.142 07:26:55 -- common/build_config.sh@60 -- # CONFIG_DPDK_COMPRESSDEV=n 00:26:22.142 07:26:55 -- common/build_config.sh@61 -- # 
CONFIG_CROSS_PREFIX= 00:26:22.142 07:26:55 -- common/build_config.sh@62 -- # CONFIG_PREFIX=/usr/local 00:26:22.143 07:26:55 -- common/build_config.sh@63 -- # CONFIG_HAVE_LIBBSD=n 00:26:22.143 07:26:55 -- common/build_config.sh@64 -- # CONFIG_UBSAN=y 00:26:22.143 07:26:55 -- common/build_config.sh@65 -- # CONFIG_PGO_CAPTURE=n 00:26:22.143 07:26:55 -- common/build_config.sh@66 -- # CONFIG_UBLK=n 00:26:22.143 07:26:55 -- common/build_config.sh@67 -- # CONFIG_ISAL_CRYPTO=y 00:26:22.143 07:26:55 -- common/build_config.sh@68 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:26:22.143 07:26:55 -- common/build_config.sh@69 -- # CONFIG_CRYPTO=n 00:26:22.143 07:26:55 -- common/build_config.sh@70 -- # CONFIG_RBD=n 00:26:22.143 07:26:55 -- common/build_config.sh@71 -- # CONFIG_LIBDIR= 00:26:22.143 07:26:55 -- common/build_config.sh@72 -- # CONFIG_IPSEC_MB_DIR= 00:26:22.143 07:26:55 -- common/build_config.sh@73 -- # CONFIG_PGO_USE=n 00:26:22.143 07:26:55 -- common/build_config.sh@74 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:22.143 07:26:55 -- common/build_config.sh@75 -- # CONFIG_GOLANG=n 00:26:22.143 07:26:55 -- common/build_config.sh@76 -- # CONFIG_VHOST=y 00:26:22.143 07:26:55 -- common/build_config.sh@77 -- # CONFIG_IDXD=y 00:26:22.143 07:26:55 -- common/build_config.sh@78 -- # CONFIG_AVAHI=n 00:26:22.143 07:26:55 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:26:22.143 07:26:55 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:22.143 07:26:55 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:26:22.143 07:26:55 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:26:22.143 07:26:55 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:26:22.143 07:26:55 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:26:22.143 07:26:55 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:26:22.143 07:26:55 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:26:22.143 07:26:55 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:26:22.143 07:26:55 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:26:22.143 07:26:55 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:26:22.143 07:26:55 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:26:22.143 07:26:55 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:26:22.143 07:26:55 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:26:22.143 07:26:55 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:26:22.143 07:26:55 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:26:22.143 07:26:55 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:26:22.143 #define SPDK_CONFIG_H 00:26:22.143 #define SPDK_CONFIG_APPS 1 00:26:22.143 #define SPDK_CONFIG_ARCH native 00:26:22.143 #define SPDK_CONFIG_ASAN 1 00:26:22.143 #undef SPDK_CONFIG_AVAHI 00:26:22.143 #undef SPDK_CONFIG_CET 00:26:22.143 #define SPDK_CONFIG_COVERAGE 1 00:26:22.143 #define SPDK_CONFIG_CROSS_PREFIX 00:26:22.143 #undef SPDK_CONFIG_CRYPTO 00:26:22.143 #undef SPDK_CONFIG_CRYPTO_MLX5 00:26:22.143 #undef SPDK_CONFIG_CUSTOMOCF 00:26:22.143 #undef SPDK_CONFIG_DAOS 00:26:22.143 #define SPDK_CONFIG_DAOS_DIR 00:26:22.143 
#define SPDK_CONFIG_DEBUG 1 00:26:22.143 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:26:22.143 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:26:22.143 #define SPDK_CONFIG_DPDK_INC_DIR 00:26:22.143 #define SPDK_CONFIG_DPDK_LIB_DIR 00:26:22.143 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:26:22.143 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:26:22.143 #define SPDK_CONFIG_EXAMPLES 1 00:26:22.143 #undef SPDK_CONFIG_FC 00:26:22.143 #define SPDK_CONFIG_FC_PATH 00:26:22.143 #define SPDK_CONFIG_FIO_PLUGIN 1 00:26:22.143 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:26:22.143 #undef SPDK_CONFIG_FUSE 00:26:22.143 #undef SPDK_CONFIG_FUZZER 00:26:22.143 #define SPDK_CONFIG_FUZZER_LIB 00:26:22.143 #undef SPDK_CONFIG_GOLANG 00:26:22.143 #undef SPDK_CONFIG_HAVE_ARC4RANDOM 00:26:22.143 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:26:22.143 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:26:22.143 #undef SPDK_CONFIG_HAVE_LIBBSD 00:26:22.143 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:26:22.143 #define SPDK_CONFIG_IDXD 1 00:26:22.143 #undef SPDK_CONFIG_IDXD_KERNEL 00:26:22.143 #undef SPDK_CONFIG_IPSEC_MB 00:26:22.143 #define SPDK_CONFIG_IPSEC_MB_DIR 00:26:22.143 #define SPDK_CONFIG_ISAL 1 00:26:22.143 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:26:22.143 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:26:22.143 #define SPDK_CONFIG_LIBDIR 00:26:22.143 #undef SPDK_CONFIG_LTO 00:26:22.143 #define SPDK_CONFIG_MAX_LCORES 00:26:22.143 #define SPDK_CONFIG_NVME_CUSE 1 00:26:22.143 #undef SPDK_CONFIG_OCF 00:26:22.143 #define SPDK_CONFIG_OCF_PATH 00:26:22.143 #define SPDK_CONFIG_OPENSSL_PATH 00:26:22.143 #undef SPDK_CONFIG_PGO_CAPTURE 00:26:22.143 #undef SPDK_CONFIG_PGO_USE 00:26:22.143 #define SPDK_CONFIG_PREFIX /usr/local 00:26:22.143 #define SPDK_CONFIG_RAID5F 1 00:26:22.143 #undef SPDK_CONFIG_RBD 00:26:22.143 #define SPDK_CONFIG_RDMA 1 00:26:22.143 #define SPDK_CONFIG_RDMA_PROV verbs 00:26:22.143 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:26:22.143 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:26:22.143 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:26:22.143 #undef SPDK_CONFIG_SHARED 00:26:22.143 #undef SPDK_CONFIG_SMA 00:26:22.143 #define SPDK_CONFIG_TESTS 1 00:26:22.143 #undef SPDK_CONFIG_TSAN 00:26:22.143 #undef SPDK_CONFIG_UBLK 00:26:22.143 #define SPDK_CONFIG_UBSAN 1 00:26:22.143 #define SPDK_CONFIG_UNIT_TESTS 1 00:26:22.143 #undef SPDK_CONFIG_URING 00:26:22.143 #define SPDK_CONFIG_URING_PATH 00:26:22.143 #undef SPDK_CONFIG_URING_ZNS 00:26:22.143 #undef SPDK_CONFIG_USDT 00:26:22.143 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:26:22.143 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:26:22.143 #undef SPDK_CONFIG_VFIO_USER 00:26:22.143 #define SPDK_CONFIG_VFIO_USER_DIR 00:26:22.143 #define SPDK_CONFIG_VHOST 1 00:26:22.143 #define SPDK_CONFIG_VIRTIO 1 00:26:22.143 #undef SPDK_CONFIG_VTUNE 00:26:22.143 #define SPDK_CONFIG_VTUNE_DIR 00:26:22.143 #define SPDK_CONFIG_WERROR 1 00:26:22.143 #define SPDK_CONFIG_WPDK_DIR 00:26:22.143 #undef SPDK_CONFIG_XNVME 00:26:22.143 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:26:22.143 07:26:55 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:26:22.143 07:26:55 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:22.143 07:26:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:22.143 07:26:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:22.143 07:26:55 -- common/autotest_common.sh@50 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:22.143 07:26:55 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:26:22.143 07:26:55 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:22.143 07:26:55 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:26:22.143 07:26:55 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:26:22.143 07:26:55 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:26:22.143 07:26:55 -- pm/common@16 -- # TEST_TAG=N/A 00:26:22.143 07:26:55 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:26:22.143 07:26:55 -- common/autotest_common.sh@52 -- # : 1 00:26:22.143 07:26:55 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:26:22.143 07:26:55 -- common/autotest_common.sh@56 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:26:22.143 07:26:55 -- common/autotest_common.sh@58 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:26:22.143 07:26:55 -- common/autotest_common.sh@60 -- # : 1 00:26:22.143 07:26:55 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:26:22.143 07:26:55 -- common/autotest_common.sh@62 -- # : 1 00:26:22.143 07:26:55 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:26:22.143 07:26:55 -- common/autotest_common.sh@64 -- # : 00:26:22.143 07:26:55 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:26:22.143 07:26:55 -- common/autotest_common.sh@66 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:26:22.143 07:26:55 -- common/autotest_common.sh@68 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:26:22.143 07:26:55 -- common/autotest_common.sh@70 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:26:22.143 07:26:55 -- common/autotest_common.sh@72 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:26:22.143 07:26:55 -- common/autotest_common.sh@74 -- # : 1 00:26:22.143 07:26:55 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:26:22.143 07:26:55 -- common/autotest_common.sh@76 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:26:22.143 07:26:55 -- common/autotest_common.sh@78 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:26:22.143 07:26:55 -- common/autotest_common.sh@80 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:26:22.143 07:26:55 -- common/autotest_common.sh@82 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:26:22.143 07:26:55 -- common/autotest_common.sh@84 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:26:22.143 07:26:55 -- common/autotest_common.sh@86 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:26:22.143 07:26:55 -- common/autotest_common.sh@88 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:26:22.143 07:26:55 -- common/autotest_common.sh@90 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:26:22.143 07:26:55 -- 
common/autotest_common.sh@92 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:26:22.143 07:26:55 -- common/autotest_common.sh@94 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:26:22.143 07:26:55 -- common/autotest_common.sh@96 -- # : rdma 00:26:22.143 07:26:55 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:26:22.143 07:26:55 -- common/autotest_common.sh@98 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:26:22.143 07:26:55 -- common/autotest_common.sh@100 -- # : 0 00:26:22.143 07:26:55 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:26:22.143 07:26:55 -- common/autotest_common.sh@102 -- # : 1 00:26:22.143 07:26:55 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:26:22.144 07:26:55 -- common/autotest_common.sh@104 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:26:22.144 07:26:55 -- common/autotest_common.sh@106 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:26:22.144 07:26:55 -- common/autotest_common.sh@108 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:26:22.144 07:26:55 -- common/autotest_common.sh@110 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:26:22.144 07:26:55 -- common/autotest_common.sh@112 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:26:22.144 07:26:55 -- common/autotest_common.sh@114 -- # : 1 00:26:22.144 07:26:55 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:26:22.144 07:26:55 -- common/autotest_common.sh@116 -- # : 1 00:26:22.144 07:26:55 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:26:22.144 07:26:55 -- common/autotest_common.sh@118 -- # : 00:26:22.144 07:26:55 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:26:22.144 07:26:55 -- common/autotest_common.sh@120 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:26:22.144 07:26:55 -- common/autotest_common.sh@122 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:26:22.144 07:26:55 -- common/autotest_common.sh@124 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:26:22.144 07:26:55 -- common/autotest_common.sh@126 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:26:22.144 07:26:55 -- common/autotest_common.sh@128 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:26:22.144 07:26:55 -- common/autotest_common.sh@130 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:26:22.144 07:26:55 -- common/autotest_common.sh@132 -- # : 00:26:22.144 07:26:55 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:26:22.144 07:26:55 -- common/autotest_common.sh@134 -- # : true 00:26:22.144 07:26:55 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:26:22.144 07:26:55 -- common/autotest_common.sh@136 -- # : 1 00:26:22.144 07:26:55 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:26:22.144 07:26:55 -- common/autotest_common.sh@138 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:26:22.144 
07:26:55 -- common/autotest_common.sh@140 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:26:22.144 07:26:55 -- common/autotest_common.sh@142 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:26:22.144 07:26:55 -- common/autotest_common.sh@144 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:26:22.144 07:26:55 -- common/autotest_common.sh@146 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:26:22.144 07:26:55 -- common/autotest_common.sh@148 -- # : 00:26:22.144 07:26:55 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:26:22.144 07:26:55 -- common/autotest_common.sh@150 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:26:22.144 07:26:55 -- common/autotest_common.sh@152 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:26:22.144 07:26:55 -- common/autotest_common.sh@154 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:26:22.144 07:26:55 -- common/autotest_common.sh@156 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:26:22.144 07:26:55 -- common/autotest_common.sh@158 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:26:22.144 07:26:55 -- common/autotest_common.sh@161 -- # : 00:26:22.144 07:26:55 -- common/autotest_common.sh@162 -- # export SPDK_TEST_FUZZER_TARGET 00:26:22.144 07:26:55 -- common/autotest_common.sh@163 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@164 -- # export SPDK_TEST_NVMF_MDNS 00:26:22.144 07:26:55 -- common/autotest_common.sh@165 -- # : 0 00:26:22.144 07:26:55 -- common/autotest_common.sh@166 -- # export SPDK_JSONRPC_GO_CLIENT 00:26:22.144 07:26:55 -- common/autotest_common.sh@169 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:22.144 07:26:55 -- common/autotest_common.sh@169 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:26:22.144 07:26:55 -- common/autotest_common.sh@170 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:22.144 07:26:55 -- common/autotest_common.sh@170 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:26:22.144 07:26:55 -- common/autotest_common.sh@171 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:22.144 07:26:55 -- common/autotest_common.sh@171 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:22.144 07:26:55 -- common/autotest_common.sh@172 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:22.144 07:26:55 -- common/autotest_common.sh@172 -- # 
LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:26:22.144 07:26:55 -- common/autotest_common.sh@175 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:26:22.144 07:26:55 -- common/autotest_common.sh@175 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:26:22.144 07:26:55 -- common/autotest_common.sh@179 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:22.144 07:26:55 -- common/autotest_common.sh@179 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:26:22.144 07:26:55 -- common/autotest_common.sh@183 -- # export PYTHONDONTWRITEBYTECODE=1 00:26:22.144 07:26:55 -- common/autotest_common.sh@183 -- # PYTHONDONTWRITEBYTECODE=1 00:26:22.144 07:26:55 -- common/autotest_common.sh@187 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:22.144 07:26:55 -- common/autotest_common.sh@187 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:26:22.144 07:26:55 -- common/autotest_common.sh@188 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:22.144 07:26:55 -- common/autotest_common.sh@188 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:26:22.144 07:26:55 -- common/autotest_common.sh@192 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:26:22.144 07:26:55 -- common/autotest_common.sh@193 -- # rm -rf /var/tmp/asan_suppression_file 00:26:22.144 07:26:55 -- common/autotest_common.sh@194 -- # cat 00:26:22.144 07:26:55 -- common/autotest_common.sh@220 -- # echo leak:libfuse3.so 00:26:22.144 07:26:55 -- common/autotest_common.sh@222 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:22.144 07:26:55 -- common/autotest_common.sh@222 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:26:22.144 07:26:55 -- common/autotest_common.sh@224 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:22.144 07:26:55 -- common/autotest_common.sh@224 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:26:22.144 07:26:55 -- common/autotest_common.sh@226 -- # '[' -z /var/spdk/dependencies ']' 00:26:22.144 07:26:55 -- common/autotest_common.sh@229 -- # export DEPENDENCY_DIR 00:26:22.144 07:26:55 -- common/autotest_common.sh@233 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:22.144 07:26:55 -- common/autotest_common.sh@233 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:26:22.144 07:26:55 -- common/autotest_common.sh@234 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:22.144 07:26:55 -- common/autotest_common.sh@234 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:26:22.144 07:26:55 -- common/autotest_common.sh@237 -- # export QEMU_BIN= 00:26:22.144 07:26:55 
-- common/autotest_common.sh@237 -- # QEMU_BIN= 00:26:22.144 07:26:55 -- common/autotest_common.sh@238 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:22.144 07:26:55 -- common/autotest_common.sh@238 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:26:22.144 07:26:55 -- common/autotest_common.sh@240 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:22.144 07:26:55 -- common/autotest_common.sh@240 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:26:22.144 07:26:55 -- common/autotest_common.sh@243 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:22.144 07:26:55 -- common/autotest_common.sh@243 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:26:22.144 07:26:55 -- common/autotest_common.sh@246 -- # '[' 0 -eq 0 ']' 00:26:22.144 07:26:55 -- common/autotest_common.sh@247 -- # export valgrind= 00:26:22.144 07:26:55 -- common/autotest_common.sh@247 -- # valgrind= 00:26:22.144 07:26:55 -- common/autotest_common.sh@253 -- # uname -s 00:26:22.144 07:26:55 -- common/autotest_common.sh@253 -- # '[' Linux = Linux ']' 00:26:22.144 07:26:55 -- common/autotest_common.sh@254 -- # HUGEMEM=4096 00:26:22.144 07:26:55 -- common/autotest_common.sh@255 -- # export CLEAR_HUGE=yes 00:26:22.144 07:26:55 -- common/autotest_common.sh@255 -- # CLEAR_HUGE=yes 00:26:22.144 07:26:55 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 00:26:22.144 07:26:55 -- common/autotest_common.sh@256 -- # [[ 0 -eq 1 ]] 00:26:22.144 07:26:55 -- common/autotest_common.sh@263 -- # MAKE=make 00:26:22.144 07:26:55 -- common/autotest_common.sh@264 -- # MAKEFLAGS=-j10 00:26:22.144 07:26:55 -- common/autotest_common.sh@280 -- # export HUGEMEM=4096 00:26:22.144 07:26:55 -- common/autotest_common.sh@280 -- # HUGEMEM=4096 00:26:22.144 07:26:55 -- common/autotest_common.sh@282 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:26:22.144 07:26:55 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:26:22.144 07:26:55 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:26:22.144 07:26:55 -- common/autotest_common.sh@307 -- # [[ -z 138701 ]] 00:26:22.144 07:26:55 -- common/autotest_common.sh@307 -- # kill -0 138701 00:26:22.144 07:26:55 -- common/autotest_common.sh@1663 -- # set_test_storage 2147483648 00:26:22.144 07:26:55 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:26:22.144 07:26:55 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:26:22.144 07:26:55 -- common/autotest_common.sh@320 -- # local mount target_dir 00:26:22.144 07:26:55 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:26:22.144 07:26:55 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:26:22.144 07:26:55 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:26:22.144 07:26:55 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:26:22.144 07:26:55 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.jCglOa 00:26:22.145 07:26:55 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:26:22.145 07:26:55 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:26:22.145 07:26:55 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:26:22.145 07:26:55 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.jCglOa/tests/interrupt /tmp/spdk.jCglOa 00:26:22.145 07:26:55 -- common/autotest_common.sh@347 -- # 
requested_size=2214592512 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@316 -- # df -T 00:26:22.145 07:26:55 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=udev 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=6230982656 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6230982656 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=1250992128 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1255759872 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=4767744 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda1 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=11009368064 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20616794112 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=9590648832 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=6276194304 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6278787072 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=2592768 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=5242880 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5242880 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=6278787072 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6278787072 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop0 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # 
fss["$mount"]=squashfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=66453504 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=66453504 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop1 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=96337920 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=96337920 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop2 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=52297728 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=52297728 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=98705408 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=109422592 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=10718208 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=1255755776 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1255755776 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu20-vg-autotest/ubuntu2004-libvirt/output 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=97961316352 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=1741463552 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop3 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=42467328 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=42467328 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r 
source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/loop4 00:26:22.145 07:26:55 -- common/autotest_common.sh@350 -- # fss["$mount"]=squashfs 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # avails["$mount"]=0 00:26:22.145 07:26:55 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:26:22.145 07:26:55 -- common/autotest_common.sh@352 -- # uses["$mount"]=67108864 00:26:22.145 07:26:55 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:26:22.145 07:26:55 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:26:22.145 * Looking for test storage... 00:26:22.145 07:26:55 -- common/autotest_common.sh@357 -- # local target_space new_size 00:26:22.145 07:26:55 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:26:22.145 07:26:55 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:22.145 07:26:55 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:26:22.145 07:26:55 -- common/autotest_common.sh@361 -- # mount=/ 00:26:22.145 07:26:55 -- common/autotest_common.sh@363 -- # target_space=11009368064 00:26:22.145 07:26:55 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:26:22.145 07:26:55 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:26:22.145 07:26:55 -- common/autotest_common.sh@369 -- # [[ ext4 == tmpfs ]] 00:26:22.145 07:26:55 -- common/autotest_common.sh@369 -- # [[ ext4 == ramfs ]] 00:26:22.145 07:26:55 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:26:22.145 07:26:55 -- common/autotest_common.sh@370 -- # new_size=11805241344 00:26:22.145 07:26:55 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:26:22.145 07:26:55 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:22.145 07:26:55 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt 00:26:22.145 07:26:55 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:22.145 * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt 00:26:22.145 07:26:55 -- common/autotest_common.sh@378 -- # return 0 00:26:22.145 07:26:55 -- common/autotest_common.sh@1665 -- # set -o errtrace 00:26:22.145 07:26:55 -- common/autotest_common.sh@1666 -- # shopt -s extdebug 00:26:22.145 07:26:55 -- common/autotest_common.sh@1667 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:26:22.145 07:26:55 -- common/autotest_common.sh@1669 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:26:22.145 07:26:55 -- common/autotest_common.sh@1670 -- # true 00:26:22.145 07:26:55 -- common/autotest_common.sh@1672 -- # xtrace_fd 00:26:22.145 07:26:55 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:26:22.145 07:26:55 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:26:22.145 07:26:55 -- common/autotest_common.sh@27 -- # exec 00:26:22.145 07:26:55 -- common/autotest_common.sh@29 -- # exec 00:26:22.145 07:26:55 -- common/autotest_common.sh@31 -- # xtrace_restore 00:26:22.145 07:26:55 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:26:22.145 07:26:55 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:26:22.145 07:26:55 -- common/autotest_common.sh@18 -- # set -x 00:26:22.145 07:26:55 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:22.145 07:26:55 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:26:22.145 07:26:55 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:26:22.145 07:26:55 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:26:22.145 07:26:55 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:26:22.145 07:26:55 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:26:22.145 07:26:55 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:22.146 07:26:55 -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt 00:26:22.146 07:26:55 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:26:22.146 07:26:55 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.146 07:26:55 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:26:22.146 07:26:55 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=138741 00:26:22.146 07:26:55 -- interrupt/interrupt_common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:26:22.146 07:26:55 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:22.146 07:26:55 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 138741 /var/tmp/spdk.sock 00:26:22.146 07:26:55 -- common/autotest_common.sh@817 -- # '[' -z 138741 ']' 00:26:22.146 07:26:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:22.146 07:26:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:22.146 07:26:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:22.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:22.146 07:26:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:22.146 07:26:55 -- common/autotest_common.sh@10 -- # set +x 00:26:22.146 [2024-02-13 07:26:55.747352] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:26:22.146 [2024-02-13 07:26:55.748084] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138741 ] 00:26:22.404 [2024-02-13 07:26:55.912659] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:22.710 [2024-02-13 07:26:56.109463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.710 [2024-02-13 07:26:56.109599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:22.710 [2024-02-13 07:26:56.109828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.710 [2024-02-13 07:26:56.396439] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:22.969 07:26:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:22.969 07:26:56 -- common/autotest_common.sh@850 -- # return 0 00:26:22.969 07:26:56 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:26:22.969 07:26:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.969 07:26:56 -- common/autotest_common.sh@10 -- # set +x 00:26:22.969 07:26:56 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:26:23.228 07:26:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.228 07:26:56 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:26:23.228 "name": "app_thread", 00:26:23.228 "id": 1, 00:26:23.228 "active_pollers": [], 00:26:23.228 "timed_pollers": [ 00:26:23.228 { 00:26:23.228 "name": "rpc_subsystem_poll_servers", 00:26:23.228 "id": 1, 00:26:23.228 "state": "waiting", 00:26:23.228 "run_count": 0, 00:26:23.228 "busy_count": 0, 00:26:23.228 "period_ticks": 8800000 00:26:23.228 } 00:26:23.228 ], 00:26:23.228 "paused_pollers": [] 00:26:23.228 }' 00:26:23.228 07:26:56 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name' 00:26:23.228 07:26:56 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:26:23.228 07:26:56 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:26:23.228 07:26:56 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:26:23.228 07:26:56 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers 00:26:23.228 07:26:56 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:26:23.228 07:26:56 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:26:23.228 07:26:56 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:26:23.228 07:26:56 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000 00:26:23.228 5000+0 records in 00:26:23.228 5000+0 records out 00:26:23.228 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0127541 s, 803 MB/s 00:26:23.228 07:26:56 -- interrupt/interrupt_common.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048 00:26:23.486 AIO0 00:26:23.487 07:26:57 -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:26:23.745 07:26:57 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:26:23.745 07:26:57 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:26:23.745 07:26:57 -- 
interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:26:23.745 07:26:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.745 07:26:57 -- common/autotest_common.sh@10 -- # set +x 00:26:23.745 07:26:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:24.004 07:26:57 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:26:24.004 "name": "app_thread", 00:26:24.004 "id": 1, 00:26:24.004 "active_pollers": [], 00:26:24.004 "timed_pollers": [ 00:26:24.004 { 00:26:24.004 "name": "rpc_subsystem_poll_servers", 00:26:24.004 "id": 1, 00:26:24.004 "state": "waiting", 00:26:24.004 "run_count": 0, 00:26:24.004 "busy_count": 0, 00:26:24.004 "period_ticks": 8800000 00:26:24.004 } 00:26:24.004 ], 00:26:24.004 "paused_pollers": [] 00:26:24.004 }' 00:26:24.004 07:26:57 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:26:24.004 07:26:57 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:26:24.004 07:26:57 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:26:24.004 07:26:57 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:26:24.004 07:26:57 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers 00:26:24.004 07:26:57 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]] 00:26:24.004 07:26:57 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:26:24.004 07:26:57 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 138741 00:26:24.004 07:26:57 -- common/autotest_common.sh@924 -- # '[' -z 138741 ']' 00:26:24.004 07:26:57 -- common/autotest_common.sh@928 -- # kill -0 138741 00:26:24.004 07:26:57 -- common/autotest_common.sh@929 -- # uname 00:26:24.004 07:26:57 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:26:24.004 07:26:57 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 138741 00:26:24.004 07:26:57 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:26:24.004 07:26:57 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:26:24.004 killing process with pid 138741 00:26:24.004 07:26:57 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 138741' 00:26:24.004 07:26:57 -- common/autotest_common.sh@943 -- # kill 138741 00:26:24.004 07:26:57 -- common/autotest_common.sh@948 -- # wait 138741 00:26:24.939 07:26:58 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:26:24.939 07:26:58 -- interrupt/interrupt_common.sh@19 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile 00:26:24.939 00:26:24.939 real 0m3.096s 00:26:24.939 user 0m2.597s 00:26:24.939 sys 0m0.470s 00:26:24.939 07:26:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:24.939 ************************************ 00:26:24.939 END TEST reap_unregistered_poller 00:26:24.939 ************************************ 00:26:24.939 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.196 07:26:58 -- spdk/autotest.sh@204 -- # uname -s 00:26:25.196 07:26:58 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:26:25.196 07:26:58 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:26:25.196 07:26:58 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:26:25.196 07:26:58 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:25.196 07:26:58 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:26:25.196 07:26:58 -- 
common/autotest_common.sh@1081 -- # xtrace_disable 00:26:25.196 07:26:58 -- common/autotest_common.sh@10 -- # set +x 00:26:25.196 ************************************ 00:26:25.196 START TEST spdk_dd 00:26:25.196 ************************************ 00:26:25.196 07:26:58 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:26:25.196 * Looking for test storage... 00:26:25.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:25.196 07:26:58 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:25.196 07:26:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.196 07:26:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.196 07:26:58 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:25.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:26:25.454 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:26:26.828 07:27:00 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:26:26.828 07:27:00 -- dd/dd.sh@11 -- # nvme_in_userspace 00:26:26.828 07:27:00 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:26.828 07:27:00 -- scripts/common.sh@312 -- # local nvmes 00:26:26.828 07:27:00 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:26.828 07:27:00 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:26.828 07:27:00 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:26.828 07:27:00 -- scripts/common.sh@297 -- # local bdf= 00:26:26.828 07:27:00 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:26.828 07:27:00 -- scripts/common.sh@232 -- # local class 00:26:26.828 07:27:00 -- scripts/common.sh@233 -- # local subclass 00:26:26.828 07:27:00 -- scripts/common.sh@234 -- # local progif 00:26:26.828 07:27:00 -- scripts/common.sh@235 -- # printf %02x 1 00:26:26.828 07:27:00 -- scripts/common.sh@235 -- # class=01 00:26:26.828 07:27:00 -- scripts/common.sh@236 -- # printf %02x 8 00:26:26.828 07:27:00 -- scripts/common.sh@236 -- # subclass=08 00:26:26.828 07:27:00 -- scripts/common.sh@237 -- # printf %02x 2 00:26:26.828 07:27:00 -- scripts/common.sh@237 -- # progif=02 00:26:26.828 07:27:00 -- scripts/common.sh@239 -- # hash lspci 00:26:26.828 07:27:00 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:26.829 07:27:00 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:26.829 07:27:00 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:26.829 07:27:00 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:26.829 07:27:00 -- scripts/common.sh@244 -- # tr -d '"' 00:26:26.829 07:27:00 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:26.829 07:27:00 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:26.829 07:27:00 -- scripts/common.sh@15 -- # local i 00:26:26.829 07:27:00 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:26.829 07:27:00 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:26.829 07:27:00 -- scripts/common.sh@24 -- # return 0 00:26:26.829 07:27:00 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:26.829 07:27:00 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:26.829 07:27:00 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:26.829 07:27:00 -- scripts/common.sh@322 -- # uname -s 00:26:26.829 07:27:00 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:26.829 07:27:00 -- scripts/common.sh@325 -- # 
bdfs+=("$bdf") 00:26:26.829 07:27:00 -- scripts/common.sh@327 -- # (( 1 )) 00:26:26.829 07:27:00 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 00:26:26.829 07:27:00 -- dd/dd.sh@13 -- # check_liburing 00:26:26.829 07:27:00 -- dd/common.sh@139 -- # local lib so 00:26:26.829 07:27:00 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:26:26.829 07:27:00 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libasan.so.5 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libdl.so.2 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ librt.so.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libssl.so.1.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libcrypto.so.1.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libpthread.so.0 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libgcc_s.so.1 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libnl-route-3.so.200 == liburing.so.* ]] 00:26:26.829 07:27:00 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libnl-3.so.200 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/common.sh@143 -- # [[ libstdc++.so.6 == liburing.so.* ]] 00:26:26.829 07:27:00 -- dd/common.sh@142 -- # read -r lib _ so _ 00:26:26.829 07:27:00 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:26:26.829 07:27:00 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:26.829 07:27:00 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:26:26.829 07:27:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:26.829 07:27:00 -- common/autotest_common.sh@10 -- # set +x 00:26:26.829 ************************************ 00:26:26.829 START TEST spdk_dd_basic_rw 00:26:26.829 ************************************ 00:26:26.829 07:27:00 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 00:26:26.829 * Looking for test storage... 00:26:26.829 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:26:26.829 07:27:00 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:26.829 07:27:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.829 07:27:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.829 07:27:00 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:26:26.829 07:27:00 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:26:26.829 07:27:00 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:26:26.829 07:27:00 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:26:26.829 07:27:00 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:26:26.829 07:27:00 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:26:26.829 07:27:00 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:26:26.829 07:27:00 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:26:26.829 07:27:00 -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:26.829 07:27:00 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:26:26.829 07:27:00 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:26:26.829 07:27:00 -- dd/common.sh@126 -- # mapfile -t id 00:26:26.829 07:27:00 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:26:27.089 07:27:00 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets 
Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected 
Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 92 Data Units Written: 7 Host Read Commands: 2064 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 
8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:26:27.089 07:27:00 -- dd/common.sh@130 -- # lbaf=04 00:26:27.090 07:27:00 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log 
Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 92 Data Units Written: 7 Host Read Commands: 2064 Host Write Commands: 113 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number 
of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:26:27.090 07:27:00 -- dd/common.sh@132 -- # lbaf=4096 00:26:27.090 07:27:00 -- dd/common.sh@134 -- # echo 4096 00:26:27.090 07:27:00 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:26:27.090 07:27:00 -- dd/basic_rw.sh@96 -- # : 00:26:27.090 07:27:00 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:27.090 07:27:00 -- common/autotest_common.sh@1075 -- # '[' 8 -le 1 ']' 00:26:27.090 07:27:00 -- dd/basic_rw.sh@96 -- # gen_conf 00:26:27.090 07:27:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:27.090 07:27:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.090 07:27:00 -- dd/common.sh@31 -- # xtrace_disable 00:26:27.090 07:27:00 -- common/autotest_common.sh@10 -- # set +x 00:26:27.090 ************************************ 00:26:27.090 START TEST dd_bs_lt_native_bs 00:26:27.090 ************************************ 00:26:27.090 07:27:00 -- common/autotest_common.sh@1102 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:27.090 07:27:00 -- common/autotest_common.sh@638 -- # local es=0 00:26:27.090 07:27:00 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:27.090 07:27:00 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.090 07:27:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:27.090 07:27:00 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.090 07:27:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:27.090 07:27:00 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.090 07:27:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:27.090 07:27:00 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:27.090 07:27:00 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
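get_native_nvme_bs, traced above, scrapes the controller's native block size out of the spdk_nvme_identify text with two bash regex matches: the first pulls the current LBA format index (#04 here), the second pulls that format's data size. A condensed sketch of the same extraction, with the identify output captured into a plain variable instead of mapfile:

# Condensed from the get_native_nvme_bs trace above.
id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')
[[ $id =~ Current\ LBA\ Format:\ *LBA\ Format\ \#([0-9]+) ]] && lbaf=${BASH_REMATCH[1]}
[[ $id =~ LBA\ Format\ \#${lbaf}:\ Data\ Size:\ *([0-9]+) ]] && echo "${BASH_REMATCH[1]}"   # 4096 for this QEMU controller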
00:26:27.090 07:27:00 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:26:27.090 { 00:26:27.090 "subsystems": [ 00:26:27.090 { 00:26:27.090 "subsystem": "bdev", 00:26:27.090 "config": [ 00:26:27.090 { 00:26:27.090 "params": { 00:26:27.090 "trtype": "pcie", 00:26:27.090 "traddr": "0000:00:06.0", 00:26:27.091 "name": "Nvme0" 00:26:27.091 }, 00:26:27.091 "method": "bdev_nvme_attach_controller" 00:26:27.091 }, 00:26:27.091 { 00:26:27.091 "method": "bdev_wait_for_examine" 00:26:27.091 } 00:26:27.091 ] 00:26:27.091 } 00:26:27.091 ] 00:26:27.091 } 00:26:27.091 [2024-02-13 07:27:00.760740] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:27.091 [2024-02-13 07:27:00.760904] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139050 ] 00:26:27.349 [2024-02-13 07:27:00.920530] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.607 [2024-02-13 07:27:01.156412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.607 [2024-02-13 07:27:01.156567] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:27.866 [2024-02-13 07:27:01.514622] spdk_dd.c:1146:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:26:27.866 [2024-02-13 07:27:01.514726] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:26:27.866 [2024-02-13 07:27:01.514857] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:28.801 [2024-02-13 07:27:02.157506] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:26:29.059 07:27:02 -- common/autotest_common.sh@641 -- # es=234 00:26:29.059 07:27:02 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:29.059 07:27:02 -- common/autotest_common.sh@650 -- # es=106 00:26:29.059 07:27:02 -- common/autotest_common.sh@651 -- # case "$es" in 00:26:29.059 07:27:02 -- common/autotest_common.sh@658 -- # es=1 00:26:29.059 07:27:02 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:29.059 00:26:29.059 real 0m1.833s 00:26:29.059 user 0m1.555s 00:26:29.059 sys 0m0.247s 00:26:29.059 07:27:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:29.059 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:29.059 ************************************ 00:26:29.059 END TEST dd_bs_lt_native_bs 00:26:29.059 ************************************ 00:26:29.059 07:27:02 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:26:29.059 07:27:02 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:26:29.059 07:27:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:26:29.059 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:29.059 ************************************ 00:26:29.059 START TEST dd_rw 00:26:29.059 ************************************ 00:26:29.059 07:27:02 -- common/autotest_common.sh@1102 -- # basic_rw 4096 00:26:29.059 07:27:02 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:26:29.059 07:27:02 -- dd/basic_rw.sh@12 -- # local count size 00:26:29.059 
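dd_bs_lt_native_bs passes precisely because spdk_dd fails: the NOT wrapper runs the command, collapses its exit status 234 (like anything above 128) to 106, flattens known failure codes to 1, and then succeeds only if the result is non-zero. The exit-status handling traced above, condensed (the full case table in autotest_common.sh is longer; only the arm exercised in this run is shown):

# Condensed from the NOT trace above.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=106   # 234 collapses to 106 here
    case "$es" in
        106) es=1 ;;           # known failure codes flatten to 1
    esac
    (( !es == 0 ))             # succeed only when the wrapped command failed
}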
07:27:02 -- dd/basic_rw.sh@13 -- # local qds bss 00:26:29.059 07:27:02 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:26:29.059 07:27:02 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:29.059 07:27:02 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:29.059 07:27:02 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:29.059 07:27:02 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:29.059 07:27:02 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:26:29.059 07:27:02 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:26:29.059 07:27:02 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:29.059 07:27:02 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:29.059 07:27:02 -- dd/basic_rw.sh@23 -- # count=15 00:26:29.059 07:27:02 -- dd/basic_rw.sh@24 -- # count=15 00:26:29.059 07:27:02 -- dd/basic_rw.sh@25 -- # size=61440 00:26:29.059 07:27:02 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:29.059 07:27:02 -- dd/common.sh@98 -- # xtrace_disable 00:26:29.059 07:27:02 -- common/autotest_common.sh@10 -- # set +x 00:26:29.627 07:27:03 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:26:29.627 07:27:03 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:29.627 07:27:03 -- dd/common.sh@31 -- # xtrace_disable 00:26:29.627 07:27:03 -- common/autotest_common.sh@10 -- # set +x 00:26:29.627 [2024-02-13 07:27:03.166868] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:29.627 [2024-02-13 07:27:03.167021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139108 ] 00:26:29.627 { 00:26:29.627 "subsystems": [ 00:26:29.627 { 00:26:29.627 "subsystem": "bdev", 00:26:29.627 "config": [ 00:26:29.627 { 00:26:29.627 "params": { 00:26:29.627 "trtype": "pcie", 00:26:29.627 "traddr": "0000:00:06.0", 00:26:29.627 "name": "Nvme0" 00:26:29.627 }, 00:26:29.627 "method": "bdev_nvme_attach_controller" 00:26:29.627 }, 00:26:29.627 { 00:26:29.627 "method": "bdev_wait_for_examine" 00:26:29.627 } 00:26:29.627 ] 00:26:29.627 } 00:26:29.627 ] 00:26:29.627 } 00:26:29.627 [2024-02-13 07:27:03.320506] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.886 [2024-02-13 07:27:03.498792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.886 [2024-02-13 07:27:03.498918] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:30.453  Copying: 60/60 [kB] (average 19 MBps)[2024-02-13 07:27:03.854900] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:31.388 00:26:31.388 00:26:31.388 07:27:04 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:26:31.388 07:27:04 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:31.388 07:27:04 -- dd/common.sh@31 -- # xtrace_disable 00:26:31.388 07:27:04 -- common/autotest_common.sh@10 -- # set +x 00:26:31.388 [2024-02-13 07:27:04.852606] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 
initialization... 00:26:31.388 [2024-02-13 07:27:04.852758] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139135 ] 00:26:31.388 { 00:26:31.388 "subsystems": [ 00:26:31.388 { 00:26:31.388 "subsystem": "bdev", 00:26:31.388 "config": [ 00:26:31.388 { 00:26:31.388 "params": { 00:26:31.388 "trtype": "pcie", 00:26:31.388 "traddr": "0000:00:06.0", 00:26:31.388 "name": "Nvme0" 00:26:31.388 }, 00:26:31.388 "method": "bdev_nvme_attach_controller" 00:26:31.388 }, 00:26:31.388 { 00:26:31.388 "method": "bdev_wait_for_examine" 00:26:31.388 } 00:26:31.388 ] 00:26:31.388 } 00:26:31.388 ] 00:26:31.388 } 00:26:31.388 [2024-02-13 07:27:05.002355] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.647 [2024-02-13 07:27:05.180073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.647 [2024-02-13 07:27:05.180215] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:31.905  Copying: 60/60 [kB] (average 29 MBps)[2024-02-13 07:27:05.535838] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:33.282 00:26:33.282 00:26:33.282 07:27:06 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:33.282 07:27:06 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:33.282 07:27:06 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:33.282 07:27:06 -- dd/common.sh@11 -- # local nvme_ref= 00:26:33.282 07:27:06 -- dd/common.sh@12 -- # local size=61440 00:26:33.282 07:27:06 -- dd/common.sh@14 -- # local bs=1048576 00:26:33.282 07:27:06 -- dd/common.sh@15 -- # local count=1 00:26:33.282 07:27:06 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:33.282 07:27:06 -- dd/common.sh@18 -- # gen_conf 00:26:33.282 07:27:06 -- dd/common.sh@31 -- # xtrace_disable 00:26:33.282 07:27:06 -- common/autotest_common.sh@10 -- # set +x 00:26:33.282 { 00:26:33.282 "subsystems": [ 00:26:33.282 { 00:26:33.282 "subsystem": "bdev", 00:26:33.282 "config": [ 00:26:33.282 { 00:26:33.282 "params": { 00:26:33.282 "trtype": "pcie", 00:26:33.282 "traddr": "0000:00:06.0", 00:26:33.282 "name": "Nvme0" 00:26:33.282 }, 00:26:33.282 "method": "bdev_nvme_attach_controller" 00:26:33.282 }, 00:26:33.282 { 00:26:33.282 "method": "bdev_wait_for_examine" 00:26:33.282 } 00:26:33.282 ] 00:26:33.282 } 00:26:33.282 ] 00:26:33.282 } 00:26:33.282 [2024-02-13 07:27:06.648347] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
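Each dd_rw iteration above is the same round trip: write count*bs bytes from dd.dump0 through the Nvme0n1 bdev, read them back into dd.dump1, and diff the two dumps. Condensed to its three steps with the flags from this first pass (bs=4096, qd=1, count=15, i.e. the 61440 bytes just verified; paths shortened, and --json carries the bdev config shown in the trace):

# One basic_rw round, as traced above.
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62             # write 61440 bytes
spdk_dd --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62  # read them back
diff -q dd.dump0 dd.dump1   # any byte mismatch fails the test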
00:26:33.282 [2024-02-13 07:27:06.648555] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139180 ] 00:26:33.282 [2024-02-13 07:27:06.813387] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:33.540 [2024-02-13 07:27:06.987307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.540 [2024-02-13 07:27:06.987434] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:33.798  Copying: 1024/1024 [kB] (average 1000 MBps)[2024-02-13 07:27:07.340022] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:34.770 00:26:34.770 00:26:34.770 07:27:08 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:34.770 07:27:08 -- dd/basic_rw.sh@23 -- # count=15 00:26:34.770 07:27:08 -- dd/basic_rw.sh@24 -- # count=15 00:26:34.770 07:27:08 -- dd/basic_rw.sh@25 -- # size=61440 00:26:34.770 07:27:08 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:26:34.770 07:27:08 -- dd/common.sh@98 -- # xtrace_disable 00:26:34.770 07:27:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.338 07:27:08 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:26:35.338 07:27:08 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:35.338 07:27:08 -- dd/common.sh@31 -- # xtrace_disable 00:26:35.338 07:27:08 -- common/autotest_common.sh@10 -- # set +x 00:26:35.338 { 00:26:35.338 "subsystems": [ 00:26:35.338 { 00:26:35.338 "subsystem": "bdev", 00:26:35.338 "config": [ 00:26:35.338 { 00:26:35.338 "params": { 00:26:35.338 "trtype": "pcie", 00:26:35.338 "traddr": "0000:00:06.0", 00:26:35.338 "name": "Nvme0" 00:26:35.338 }, 00:26:35.338 "method": "bdev_nvme_attach_controller" 00:26:35.338 }, 00:26:35.338 { 00:26:35.338 "method": "bdev_wait_for_examine" 00:26:35.338 } 00:26:35.338 ] 00:26:35.338 } 00:26:35.338 ] 00:26:35.338 } 00:26:35.338 [2024-02-13 07:27:08.941319] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
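clear_nvme, which just completed above (Copying: 1024/1024 [kB]), wipes the bdev between rounds by streaming zeroes over it, presumably so stale data from the previous pass cannot make the next diff succeed spuriously. Its effect, condensed from the trace (one 1048576-byte block comfortably covers the 61440 bytes the round wrote):

# What the clear_nvme call above boils down to.
spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62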
00:26:35.338 [2024-02-13 07:27:08.941548] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139219 ] 00:26:35.596 [2024-02-13 07:27:09.109127] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.596 [2024-02-13 07:27:09.289695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:35.596 [2024-02-13 07:27:09.289819] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:36.164  Copying: 60/60 [kB] (average 58 MBps)[2024-02-13 07:27:09.643557] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:37.098 00:26:37.098 00:26:37.098 07:27:10 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:26:37.098 07:27:10 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:37.098 07:27:10 -- dd/common.sh@31 -- # xtrace_disable 00:26:37.098 07:27:10 -- common/autotest_common.sh@10 -- # set +x 00:26:37.098 [2024-02-13 07:27:10.757419] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:37.098 [2024-02-13 07:27:10.757595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139240 ] 00:26:37.098 { 00:26:37.098 "subsystems": [ 00:26:37.098 { 00:26:37.098 "subsystem": "bdev", 00:26:37.098 "config": [ 00:26:37.098 { 00:26:37.098 "params": { 00:26:37.098 "trtype": "pcie", 00:26:37.098 "traddr": "0000:00:06.0", 00:26:37.098 "name": "Nvme0" 00:26:37.098 }, 00:26:37.098 "method": "bdev_nvme_attach_controller" 00:26:37.098 }, 00:26:37.098 { 00:26:37.098 "method": "bdev_wait_for_examine" 00:26:37.098 } 00:26:37.098 ] 00:26:37.098 } 00:26:37.098 ] 00:26:37.098 } 00:26:37.356 [2024-02-13 07:27:10.912402] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.615 [2024-02-13 07:27:11.092388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.615 [2024-02-13 07:27:11.092522] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:37.873  Copying: 60/60 [kB] (average 58 MBps)[2024-02-13 07:27:11.452254] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:38.808 00:26:38.808 00:26:38.808 07:27:12 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:38.808 07:27:12 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:26:38.808 07:27:12 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:38.808 07:27:12 -- dd/common.sh@11 -- # local nvme_ref= 00:26:38.808 07:27:12 -- dd/common.sh@12 -- # local size=61440 00:26:38.808 07:27:12 -- dd/common.sh@14 -- # local bs=1048576 00:26:38.808 
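The round that just finished is the same 15x4096-byte transfer as before with one change: --qd=64 keeps up to 64 I/Os in flight instead of 1, and the progress lines reflect it (the write averaged 19 MBps at qd=1 above versus 58 MBps here). Only the flag differs:

# Same transfer as the qd=1 round, with 64 requests outstanding.
spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62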
07:27:12 -- dd/common.sh@15 -- # local count=1 00:26:38.808 07:27:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:38.808 07:27:12 -- dd/common.sh@18 -- # gen_conf 00:26:38.808 07:27:12 -- dd/common.sh@31 -- # xtrace_disable 00:26:38.808 07:27:12 -- common/autotest_common.sh@10 -- # set +x 00:26:38.808 { 00:26:38.808 "subsystems": [ 00:26:38.808 { 00:26:38.808 "subsystem": "bdev", 00:26:38.808 "config": [ 00:26:38.808 { 00:26:38.808 "params": { 00:26:38.808 "trtype": "pcie", 00:26:38.808 "traddr": "0000:00:06.0", 00:26:38.808 "name": "Nvme0" 00:26:38.808 }, 00:26:38.808 "method": "bdev_nvme_attach_controller" 00:26:38.808 }, 00:26:38.808 { 00:26:38.808 "method": "bdev_wait_for_examine" 00:26:38.808 } 00:26:38.808 ] 00:26:38.808 } 00:26:38.808 ] 00:26:38.808 } 00:26:38.808 [2024-02-13 07:27:12.495181] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:26:38.808 [2024-02-13 07:27:12.495374] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139272 ] 00:26:39.066 [2024-02-13 07:27:12.664314] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.324 [2024-02-13 07:27:12.857046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.324 [2024-02-13 07:27:12.857185] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:39.583  Copying: 1024/1024 [kB] (average 1000 MBps)[2024-02-13 07:27:13.211850] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:40.959 00:26:40.959 00:26:40.959 07:27:14 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:40.959 07:27:14 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:40.959 07:27:14 -- dd/basic_rw.sh@23 -- # count=7 00:26:40.959 07:27:14 -- dd/basic_rw.sh@24 -- # count=7 00:26:40.959 07:27:14 -- dd/basic_rw.sh@25 -- # size=57344 00:26:40.959 07:27:14 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:40.959 07:27:14 -- dd/common.sh@98 -- # xtrace_disable 00:26:40.959 07:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:41.218 07:27:14 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:26:41.218 07:27:14 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:41.218 07:27:14 -- dd/common.sh@31 -- # xtrace_disable 00:26:41.218 07:27:14 -- common/autotest_common.sh@10 -- # set +x 00:26:41.218 [2024-02-13 07:27:14.815001] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
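The bss loop traced at the start of dd_rw builds the block-size ladder by left-shifting the native size, and each rung pairs with its own count; this run has just stepped from 4096-byte to 8192-byte blocks. The arithmetic, spelled out:

# How the block sizes and per-pass transfer sizes above are derived.
native_bs=4096
for bs_shift in 0 1 2; do
    echo $((native_bs << bs_shift))   # 4096, 8192, 16384
done
# bs=4096 pairs with count=15 -> 15 * 4096 = 61440 bytes per pass
# bs=8192 pairs with count=7  ->  7 * 8192 = 57344 bytes per pass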
00:26:41.218 [2024-02-13 07:27:14.815136] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139300 ] 00:26:41.218 { 00:26:41.218 "subsystems": [ 00:26:41.218 { 00:26:41.218 "subsystem": "bdev", 00:26:41.218 "config": [ 00:26:41.218 { 00:26:41.218 "params": { 00:26:41.218 "trtype": "pcie", 00:26:41.218 "traddr": "0000:00:06.0", 00:26:41.218 "name": "Nvme0" 00:26:41.218 }, 00:26:41.218 "method": "bdev_nvme_attach_controller" 00:26:41.218 }, 00:26:41.218 { 00:26:41.218 "method": "bdev_wait_for_examine" 00:26:41.218 } 00:26:41.218 ] 00:26:41.218 } 00:26:41.218 ] 00:26:41.218 } 00:26:41.476 [2024-02-13 07:27:14.969256] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.476 [2024-02-13 07:27:15.159970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.476 [2024-02-13 07:27:15.160103] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:42.044  Copying: 56/56 [kB] (average 54 MBps)[2024-02-13 07:27:15.514805] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:42.980 00:26:42.980 00:26:42.980 07:27:16 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:26:42.980 07:27:16 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:42.980 07:27:16 -- dd/common.sh@31 -- # xtrace_disable 00:26:42.980 07:27:16 -- common/autotest_common.sh@10 -- # set +x 00:26:42.980 { 00:26:42.980 "subsystems": [ 00:26:42.980 { 00:26:42.980 "subsystem": "bdev", 00:26:42.980 "config": [ 00:26:42.980 { 00:26:42.980 "params": { 00:26:42.980 "trtype": "pcie", 00:26:42.980 "traddr": "0000:00:06.0", 00:26:42.980 "name": "Nvme0" 00:26:42.980 }, 00:26:42.980 "method": "bdev_nvme_attach_controller" 00:26:42.980 }, 00:26:42.980 { 00:26:42.980 "method": "bdev_wait_for_examine" 00:26:42.980 } 00:26:42.980 ] 00:26:42.980 } 00:26:42.980 ] 00:26:42.980 } 00:26:42.980 [2024-02-13 07:27:16.556562] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
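Every spdk_dd invocation in these tests receives the same single-bdev configuration through --json on an anonymous file descriptor: gen_conf prints it and the shell wires it to /dev/fd/62. Written out to an ordinary file, the configuration is exactly what the traces above show, minus the log prefixes:

# The bdev subsystem config spdk_dd consumes via --json.
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}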
00:26:42.980 [2024-02-13 07:27:16.556760] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139351 ] 00:26:43.239 [2024-02-13 07:27:16.724678] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.239 [2024-02-13 07:27:16.906558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.239 [2024-02-13 07:27:16.906681] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:43.806  Copying: 56/56 [kB] (average 54 MBps)[2024-02-13 07:27:17.260009] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:44.740 00:26:44.740 00:26:44.740 07:27:18 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:44.740 07:27:18 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:44.740 07:27:18 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:44.740 07:27:18 -- dd/common.sh@11 -- # local nvme_ref= 00:26:44.740 07:27:18 -- dd/common.sh@12 -- # local size=57344 00:26:44.740 07:27:18 -- dd/common.sh@14 -- # local bs=1048576 00:26:44.740 07:27:18 -- dd/common.sh@15 -- # local count=1 00:26:44.740 07:27:18 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:44.740 07:27:18 -- dd/common.sh@18 -- # gen_conf 00:26:44.740 07:27:18 -- dd/common.sh@31 -- # xtrace_disable 00:26:44.740 07:27:18 -- common/autotest_common.sh@10 -- # set +x 00:26:44.740 { 00:26:44.740 "subsystems": [ 00:26:44.740 { 00:26:44.740 "subsystem": "bdev", 00:26:44.740 "config": [ 00:26:44.740 { 00:26:44.740 "params": { 00:26:44.740 "trtype": "pcie", 00:26:44.740 "traddr": "0000:00:06.0", 00:26:44.740 "name": "Nvme0" 00:26:44.740 }, 00:26:44.740 "method": "bdev_nvme_attach_controller" 00:26:44.740 }, 00:26:44.740 { 00:26:44.740 "method": "bdev_wait_for_examine" 00:26:44.740 } 00:26:44.740 ] 00:26:44.740 } 00:26:44.740 ] 00:26:44.740 } 00:26:44.740 [2024-02-13 07:27:18.385218] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
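Before each write pass, gen_bytes regenerates the source data (gen_bytes 61440 and gen_bytes 57344 in the traces above); the helper's body is not expanded in this log, so the line below is only a hypothetical stand-in with the same observable effect, refilling the source dump with the requested number of random bytes:

# Hypothetical stand-in for gen_bytes 57344 (the real helper lives in dd/common.sh).
head -c 57344 /dev/urandom > /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0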
00:26:44.740 [2024-02-13 07:27:18.385413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139379 ] 00:26:44.998 [2024-02-13 07:27:18.550882] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.256 [2024-02-13 07:27:18.760838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.256 [2024-02-13 07:27:18.760972] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:45.522  Copying: 1024/1024 [kB] (average 1000 MBps)[2024-02-13 07:27:19.130055] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:46.470 00:26:46.470 00:26:46.470 07:27:20 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:46.470 07:27:20 -- dd/basic_rw.sh@23 -- # count=7 00:26:46.470 07:27:20 -- dd/basic_rw.sh@24 -- # count=7 00:26:46.470 07:27:20 -- dd/basic_rw.sh@25 -- # size=57344 00:26:46.470 07:27:20 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:26:46.470 07:27:20 -- dd/common.sh@98 -- # xtrace_disable 00:26:46.470 07:27:20 -- common/autotest_common.sh@10 -- # set +x 00:26:47.038 07:27:20 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:26:47.038 07:27:20 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:47.038 07:27:20 -- dd/common.sh@31 -- # xtrace_disable 00:26:47.038 07:27:20 -- common/autotest_common.sh@10 -- # set +x 00:26:47.038 [2024-02-13 07:27:20.710603] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:26:47.038 [2024-02-13 07:27:20.710755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139411 ] 00:26:47.038 { 00:26:47.038 "subsystems": [ 00:26:47.038 { 00:26:47.038 "subsystem": "bdev", 00:26:47.038 "config": [ 00:26:47.038 { 00:26:47.038 "params": { 00:26:47.038 "trtype": "pcie", 00:26:47.038 "traddr": "0000:00:06.0", 00:26:47.038 "name": "Nvme0" 00:26:47.038 }, 00:26:47.038 "method": "bdev_nvme_attach_controller" 00:26:47.038 }, 00:26:47.038 { 00:26:47.038 "method": "bdev_wait_for_examine" 00:26:47.038 } 00:26:47.038 ] 00:26:47.038 } 00:26:47.038 ] 00:26:47.038 } 00:26:47.297 [2024-02-13 07:27:20.864916] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.556 [2024-02-13 07:27:21.050129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.556 [2024-02-13 07:27:21.050246] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:47.816  Copying: 56/56 [kB] (average 54 MBps)[2024-02-13 07:27:21.403072] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:48.752 00:26:48.752 00:26:48.752 07:27:22 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:26:48.752 07:27:22 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:48.752 07:27:22 -- dd/common.sh@31 -- # xtrace_disable 00:26:48.752 07:27:22 -- common/autotest_common.sh@10 -- # set +x 00:26:49.011 [2024-02-13 07:27:22.506734] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:26:49.011 [2024-02-13 07:27:22.506938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139443 ] 00:26:49.011 { 00:26:49.011 "subsystems": [ 00:26:49.011 { 00:26:49.011 "subsystem": "bdev", 00:26:49.011 "config": [ 00:26:49.011 { 00:26:49.011 "params": { 00:26:49.011 "trtype": "pcie", 00:26:49.011 "traddr": "0000:00:06.0", 00:26:49.011 "name": "Nvme0" 00:26:49.011 }, 00:26:49.011 "method": "bdev_nvme_attach_controller" 00:26:49.011 }, 00:26:49.011 { 00:26:49.011 "method": "bdev_wait_for_examine" 00:26:49.011 } 00:26:49.011 ] 00:26:49.011 } 00:26:49.011 ] 00:26:49.011 } 00:26:49.011 [2024-02-13 07:27:22.676505] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.270 [2024-02-13 07:27:22.857305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.270 [2024-02-13 07:27:22.857426] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:49.528  Copying: 56/56 [kB] (average 54 MBps)[2024-02-13 07:27:23.212601] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:50.464 00:26:50.464 00:26:50.723 07:27:24 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:50.723 07:27:24 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:26:50.723 07:27:24 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:50.723 07:27:24 -- dd/common.sh@11 -- # local nvme_ref= 00:26:50.723 07:27:24 -- dd/common.sh@12 -- # local size=57344 00:26:50.723 07:27:24 -- dd/common.sh@14 -- # local bs=1048576 00:26:50.723 07:27:24 -- dd/common.sh@15 -- # local count=1 00:26:50.723 07:27:24 -- dd/common.sh@18 -- # gen_conf 00:26:50.723 07:27:24 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:50.723 07:27:24 -- dd/common.sh@31 -- # xtrace_disable 00:26:50.723 07:27:24 -- common/autotest_common.sh@10 -- # set +x 00:26:50.723 { 00:26:50.723 "subsystems": [ 00:26:50.723 { 00:26:50.723 "subsystem": "bdev", 00:26:50.723 "config": [ 00:26:50.723 { 00:26:50.723 "params": { 00:26:50.723 "trtype": "pcie", 00:26:50.723 "traddr": "0000:00:06.0", 00:26:50.723 "name": "Nvme0" 00:26:50.723 }, 00:26:50.723 "method": "bdev_nvme_attach_controller" 00:26:50.723 }, 00:26:50.723 { 00:26:50.723 "method": "bdev_wait_for_examine" 00:26:50.723 } 00:26:50.723 ] 00:26:50.723 } 00:26:50.723 ] 00:26:50.723 } 00:26:50.723 [2024-02-13 07:27:24.230641] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:26:50.723 [2024-02-13 07:27:24.231105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139471 ] 00:26:50.723 [2024-02-13 07:27:24.396001] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.982 [2024-02-13 07:27:24.572941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.982 [2024-02-13 07:27:24.573087] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:51.241  Copying: 1024/1024 [kB] (average 1000 MBps)[2024-02-13 07:27:24.929485] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:52.617 00:26:52.617 00:26:52.617 07:27:25 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:26:52.617 07:27:25 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:52.617 07:27:25 -- dd/basic_rw.sh@23 -- # count=3 00:26:52.617 07:27:25 -- dd/basic_rw.sh@24 -- # count=3 00:26:52.617 07:27:25 -- dd/basic_rw.sh@25 -- # size=49152 00:26:52.617 07:27:25 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:52.617 07:27:25 -- dd/common.sh@98 -- # xtrace_disable 00:26:52.617 07:27:25 -- common/autotest_common.sh@10 -- # set +x 00:26:52.876 07:27:26 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:26:52.876 07:27:26 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:52.876 07:27:26 -- dd/common.sh@31 -- # xtrace_disable 00:26:52.876 07:27:26 -- common/autotest_common.sh@10 -- # set +x 00:26:52.876 [2024-02-13 07:27:26.421029] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:26:52.876 [2024-02-13 07:27:26.421200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139521 ] 00:26:52.876 { 00:26:52.876 "subsystems": [ 00:26:52.876 { 00:26:52.876 "subsystem": "bdev", 00:26:52.876 "config": [ 00:26:52.876 { 00:26:52.876 "params": { 00:26:52.876 "trtype": "pcie", 00:26:52.876 "traddr": "0000:00:06.0", 00:26:52.876 "name": "Nvme0" 00:26:52.876 }, 00:26:52.876 "method": "bdev_nvme_attach_controller" 00:26:52.876 }, 00:26:52.876 { 00:26:52.876 "method": "bdev_wait_for_examine" 00:26:52.876 } 00:26:52.876 ] 00:26:52.876 } 00:26:52.876 ] 00:26:52.876 } 00:26:53.135 [2024-02-13 07:27:26.576100] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.135 [2024-02-13 07:27:26.753169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:53.135 [2024-02-13 07:27:26.753295] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:53.702  Copying: 48/48 [kB] (average 46 MBps)[2024-02-13 07:27:27.106222] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:54.639 00:26:54.639 00:26:54.639 07:27:28 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:26:54.639 07:27:28 -- dd/basic_rw.sh@37 -- # gen_conf 00:26:54.639 07:27:28 -- dd/common.sh@31 -- # xtrace_disable 00:26:54.639 07:27:28 -- common/autotest_common.sh@10 -- # set +x 00:26:54.639 { 00:26:54.639 "subsystems": [ 00:26:54.639 { 00:26:54.639 "subsystem": "bdev", 00:26:54.639 "config": [ 00:26:54.639 { 00:26:54.639 "params": { 00:26:54.639 "trtype": "pcie", 00:26:54.639 "traddr": "0000:00:06.0", 00:26:54.639 "name": "Nvme0" 00:26:54.639 }, 00:26:54.639 "method": "bdev_nvme_attach_controller" 00:26:54.639 }, 00:26:54.639 { 00:26:54.639 "method": "bdev_wait_for_examine" 00:26:54.639 } 00:26:54.639 ] 00:26:54.639 } 00:26:54.639 ] 00:26:54.639 } 00:26:54.639 [2024-02-13 07:27:28.133389] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:26:54.639 [2024-02-13 07:27:28.133601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139548 ] 00:26:54.639 [2024-02-13 07:27:28.301328] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.897 [2024-02-13 07:27:28.483789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.898 [2024-02-13 07:27:28.483915] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:55.156  Copying: 48/48 [kB] (average 46 MBps)[2024-02-13 07:27:28.837772] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:56.532 00:26:56.532 00:26:56.532 07:27:29 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:26:56.532 07:27:29 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:26:56.532 07:27:29 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:26:56.532 07:27:29 -- dd/common.sh@11 -- # local nvme_ref= 00:26:56.532 07:27:29 -- dd/common.sh@12 -- # local size=49152 00:26:56.532 07:27:29 -- dd/common.sh@14 -- # local bs=1048576 00:26:56.532 07:27:29 -- dd/common.sh@15 -- # local count=1 00:26:56.532 07:27:29 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:26:56.532 07:27:29 -- dd/common.sh@18 -- # gen_conf 00:26:56.532 07:27:29 -- dd/common.sh@31 -- # xtrace_disable 00:26:56.532 07:27:29 -- common/autotest_common.sh@10 -- # set +x 00:26:56.532 { 00:26:56.532 "subsystems": [ 00:26:56.532 { 00:26:56.532 "subsystem": "bdev", 00:26:56.532 "config": [ 00:26:56.532 { 00:26:56.532 "params": { 00:26:56.532 "trtype": "pcie", 00:26:56.532 "traddr": "0000:00:06.0", 00:26:56.532 "name": "Nvme0" 00:26:56.532 }, 00:26:56.532 "method": "bdev_nvme_attach_controller" 00:26:56.532 }, 00:26:56.532 { 00:26:56.532 "method": "bdev_wait_for_examine" 00:26:56.532 } 00:26:56.532 ] 00:26:56.532 } 00:26:56.532 ] 00:26:56.532 } 00:26:56.532 [2024-02-13 07:27:29.943218] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:26:56.532 [2024-02-13 07:27:29.943444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139574 ] 00:26:56.532 [2024-02-13 07:27:30.109335] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.804 [2024-02-13 07:27:30.287975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.804 [2024-02-13 07:27:30.288411] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:57.081  Copying: 1024/1024 [kB] (average 1000 MBps)[2024-02-13 07:27:30.644658] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:26:58.017 00:26:58.017 00:26:58.017 07:27:31 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:26:58.017 07:27:31 -- dd/basic_rw.sh@23 -- # count=3 00:26:58.017 07:27:31 -- dd/basic_rw.sh@24 -- # count=3 00:26:58.017 07:27:31 -- dd/basic_rw.sh@25 -- # size=49152 00:26:58.017 07:27:31 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:26:58.017 07:27:31 -- dd/common.sh@98 -- # xtrace_disable 00:26:58.017 07:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:58.586 07:27:31 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:26:58.586 07:27:31 -- dd/basic_rw.sh@30 -- # gen_conf 00:26:58.586 07:27:31 -- dd/common.sh@31 -- # xtrace_disable 00:26:58.586 07:27:31 -- common/autotest_common.sh@10 -- # set +x 00:26:58.586 { 00:26:58.586 "subsystems": [ 00:26:58.586 { 00:26:58.586 "subsystem": "bdev", 00:26:58.586 "config": [ 00:26:58.586 { 00:26:58.586 "params": { 00:26:58.586 "trtype": "pcie", 00:26:58.586 "traddr": "0000:00:06.0", 00:26:58.586 "name": "Nvme0" 00:26:58.586 }, 00:26:58.586 "method": "bdev_nvme_attach_controller" 00:26:58.586 }, 00:26:58.586 { 00:26:58.586 "method": "bdev_wait_for_examine" 00:26:58.587 } 00:26:58.587 ] 00:26:58.587 } 00:26:58.587 ] 00:26:58.587 } 00:26:58.587 [2024-02-13 07:27:32.059992] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:26:58.587 [2024-02-13 07:27:32.060227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139601 ] 00:26:58.587 [2024-02-13 07:27:32.228843] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.845 [2024-02-13 07:27:32.416068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.845 [2024-02-13 07:27:32.416224] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:26:59.104  Copying: 48/48 [kB] (average 46 MBps)[2024-02-13 07:27:32.769466] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:00.480 00:27:00.480 00:27:00.480 07:27:33 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:27:00.480 07:27:33 -- dd/basic_rw.sh@37 -- # gen_conf 00:27:00.480 07:27:33 -- dd/common.sh@31 -- # xtrace_disable 00:27:00.480 07:27:33 -- common/autotest_common.sh@10 -- # set +x 00:27:00.480 [2024-02-13 07:27:33.870458] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:00.480 [2024-02-13 07:27:33.870701] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139633 ] 00:27:00.480 { 00:27:00.480 "subsystems": [ 00:27:00.480 { 00:27:00.480 "subsystem": "bdev", 00:27:00.480 "config": [ 00:27:00.480 { 00:27:00.480 "params": { 00:27:00.480 "trtype": "pcie", 00:27:00.480 "traddr": "0000:00:06.0", 00:27:00.480 "name": "Nvme0" 00:27:00.480 }, 00:27:00.480 "method": "bdev_nvme_attach_controller" 00:27:00.480 }, 00:27:00.480 { 00:27:00.480 "method": "bdev_wait_for_examine" 00:27:00.480 } 00:27:00.480 ] 00:27:00.480 } 00:27:00.480 ] 00:27:00.480 } 00:27:00.480 [2024-02-13 07:27:34.021854] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.739 [2024-02-13 07:27:34.210754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.739 [2024-02-13 07:27:34.210896] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:00.998  Copying: 48/48 [kB] (average 46 MBps)[2024-02-13 07:27:34.564523] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:01.935 00:27:01.935 00:27:01.935 07:27:35 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:01.935 07:27:35 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:27:01.935 07:27:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:01.935 07:27:35 -- dd/common.sh@11 -- # local nvme_ref= 00:27:01.935 07:27:35 -- dd/common.sh@12 -- # local size=49152 00:27:01.935 07:27:35 -- dd/common.sh@14 -- # local bs=1048576 00:27:01.935 
07:27:35 -- dd/common.sh@15 -- # local count=1 00:27:01.935 07:27:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:01.935 07:27:35 -- dd/common.sh@18 -- # gen_conf 00:27:01.935 07:27:35 -- dd/common.sh@31 -- # xtrace_disable 00:27:01.935 07:27:35 -- common/autotest_common.sh@10 -- # set +x 00:27:01.935 [2024-02-13 07:27:35.586973] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:01.935 [2024-02-13 07:27:35.588060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139661 ] 00:27:01.935 { 00:27:01.935 "subsystems": [ 00:27:01.935 { 00:27:01.935 "subsystem": "bdev", 00:27:01.935 "config": [ 00:27:01.935 { 00:27:01.935 "params": { 00:27:01.935 "trtype": "pcie", 00:27:01.935 "traddr": "0000:00:06.0", 00:27:01.935 "name": "Nvme0" 00:27:01.935 }, 00:27:01.935 "method": "bdev_nvme_attach_controller" 00:27:01.935 }, 00:27:01.935 { 00:27:01.935 "method": "bdev_wait_for_examine" 00:27:01.935 } 00:27:01.935 ] 00:27:01.935 } 00:27:01.935 ] 00:27:01.935 } 00:27:02.193 [2024-02-13 07:27:35.744023] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:02.452 [2024-02-13 07:27:35.934718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.452 [2024-02-13 07:27:35.934884] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:02.711  Copying: 1024/1024 [kB] (average 1000 MBps)[2024-02-13 07:27:36.292661] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:03.647 00:27:03.647 00:27:03.906 ************************************ 00:27:03.906 END TEST dd_rw 00:27:03.906 ************************************ 00:27:03.906 00:27:03.906 real 0m34.774s 00:27:03.906 user 0m28.486s 00:27:03.906 sys 0m5.046s 00:27:03.906 07:27:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:03.906 07:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:03.906 07:27:37 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:27:03.906 07:27:37 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:03.906 07:27:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:03.906 07:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:03.906 ************************************ 00:27:03.906 START TEST dd_rw_offset 00:27:03.906 ************************************ 00:27:03.906 07:27:37 -- common/autotest_common.sh@1102 -- # basic_offset 00:27:03.906 07:27:37 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:27:03.906 07:27:37 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:27:03.906 07:27:37 -- dd/common.sh@98 -- # xtrace_disable 00:27:03.906 07:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:03.906 07:27:37 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:27:03.906 07:27:37 -- dd/basic_rw.sh@56 -- # 
data=sg4cqgyekw4mjkvehyuipepfd6ybnhct4lrrtgdapc1zinp5rmsfgrq0m6bqxcilujy9hdbbliz1lr3daqtpbgj19cr9cjelhqeiedm713984b2eoxcaqh96edyibhv9we9zp8gf24r7k6cnxtexc15m235by09i6wunfv8z7cnxau0x44ee5xdgwowiw9nmh2y6cwwry5pfjx9bs2ksk6gj20teux2bj3d1pmoyzadqse4cmtcocqpg714uiyanchh719297z4scop9bfbo49sperewanmz22ff5pt73ypvencv856mv0qh2wiycwycuvo2evbpt4xmx41rj8qc76gba2p9kx4y8cpp7nbc60lomuoru1yyq4uy4fzd593o6us1d95j1ro1mh4sub5f02r03fuxj5mdh1ku2xdrq9milz7w08kr2wk4n13o6xva46e93ev8h2v8kltzbjw6gqv868tkj8kfa22m8mxdrtdbmdldr5kohiw96pwtt5lckdkv8emliudg8voxia3qc7qjbv1tzmzu1e8is9hiyszdl573wrttxxua7clbm92btah3zqgp8txbfv7orzmwp5yvgnifeamqmrez4ncxworfw6b0j0n6lp27xbfuom3ivnnp7hp1v5cr43rzzz3q4c78dp0e6oly5kgpwj356xs69kwdl3bq5rfwwcse80dckl922w9i7mrjnymmq2sz6dg8min8ju0d61wgh24b60lousiqsjds65zql0p2stiguza7bjswmq1qe7wmlwpbcma56kzbguf4u77qq7m5zqh6ermn66xzqzv37e3yi400gi8ixyu0t7d6tf1jwmtldep6gy6ywm0q95d9au7i5oqbph4z26farmgwevbg7asdcl9msokfbh4viiqwi5xfcujh9hft0w9i4k8d3shl3ymq7v9w3sayoa1p29m7hzh9c0t3cpti41ke68uq9pe6ag457rm5lxxvmvw5bloc0lov7yvsr5yvfmkmrtjlawsnc60dg4rgezgv9rtdctifjv2sq1quoeroelcdzye53ftixokpzp3jvis9axkgphahgfjug4pmpw348ru5t8rraj3zz51oz6vbljqjibbn016fn6ivljs2hfyxci1i1ejb3zkzzvybgjrd4om06raltt59l4a72uyvbhxyhbq7b7oq27cgbmo8v8zp90umkb8lwr6ofnomz7dt8byuqyqrzfm8x9lu8jkkti0yq8r8u1llajmrhbtg5ikuq9w51iqoz4b8plbvgqz7hngw3jjypw5e1dy68gkuo5sgaqv9cktg3babojbyj5ah5aybbruiq14b4kvmusqid7jbduwi01856ic4c8swt1c85ex8ypznvjynr4ec7s9lxket73hoedb6yno6mwpwz8uyimmb0j2tcjoet1g1mqgtjackmn02pc74mcpgyc4h07z8h8g9t0h0xhnyevgkn3prbgz88mo8jwavq8s0oepr85oaclrpxqjkn371no431dkvlzdm0hrwqex470jyd89odzknt5yj172ht9ccflmh975j0v6084ji05hotur23a4drm1fwywq4kyg15y57pqphar9cmjatffc4lr6arop1r2pw4ct41unxd16kk5slyaapphd92oii29vcnkehc21rujdo646z33qjnsagprdzutiozompiwghxjtnl4wulop3ocmp58udn8ozl6entkz5g0f9xdoapkl27gsdfaqzcj5i0bmf0yj3qaj199h4o8kcuwr9jpbrfadqrnzclh2tiyp9jtpkblpvcz1xwbpzvwyol1vozvcvnshbjon1qfxq8thhwud2v6eq5sm99v0x747rfik2dozx8asgh2flytia0eh2zxvlfbn0nqj7wsta2kv35gvs4obijqr4lg12yoqfvy6oe1a059jff3vjoprtcoc8c7wka7iav5vsjbgpqy0l1o18e7ipqwl7yl6xjzkewppizm02c4rkonnd866oz41mhihh0djrnkf9q5z0kcvbrknicqim6qk8u5guq2mm1yechvy3ceowf6wu3qfudt2w96ngsuyjj4fzhe53zuvwi2cv012lo6f2ju3okqyz875mry8blu4imeeai425sks2wrwpn0seyhqkf4wctujt8klea9oynskopfpcntqabttb2xkce2jnknq0foxny4be5sdehffphvrfm2pj6uj4lqyslk0bi9ixbp6m1tupj2h0ddf6m6icq39ez4py5a1g65gizz68di3irmjyih4i5izczgk0bo5tak6xxu8e5jtbcprjb8yuextf3cnqv6lpyushqkuim1g72ns5qu67211nbsk4wwzgc0t0odq035vcyge2z9rsvyjcf4h00v2otoawcb6s6syt6g283vtdkjb5efhtsrnddoedeb0pv9uv8ypy83lkniraq8xz39bygx28o059fi6va0z02vjrnpvstcrksu98j0npvao8s8lfmwqh0be8k8cdk06w97oxzn58wgej1qv8oigf33w8b53r2jp6jqjtqymcbpc6x2vwxoin88m7fygq0bzne2uysytzyi44ngkzvclu6r1h8jujenq1dm1w7yumrpl3q09detr6b3ilxcny2jkrelix557uv29m0a6hu5a59f12mqtn7euoe32yycz98zchwdln26l7pncxrp2yodxy6npflz1dyo0fdm3ppo3opijlhsw8a8cwad96opn3dygezpvfm5xv4qo65libowxpllz4tnpqct01o5u37ljad6kccthjd9l3ka3nubhqrke4bcbpvrjusie3qdai3ojg0wk62ezcaqpk10v6007002b3rtudiqdy3p28pkqm6wv43jondukc9g1a2elicm4h0g7ifnvv84txuauf56sgd09qioybrt3wib1l1bvftfv6mfwmqkxtvq86hze8kpq8hwyi3lzq4lkdxc6cr27qlr4h73n34oydkkas3172lhv4xu3oe0d91jz3wddusdxnoe6h840q7gk51nur088hiw8qiujwkaw1ctxdd4csfh43v5t0ufub99v5wply9r4yscyrykv0jihoh8aer5oqppseqvn72warr5ofw7kiisy08mosic2igl35ulsa58p2qr11bmv1muxu96x4nyb7v4cvsbxl9bcnzsvq4ovykxkgjkhedxftdu6t60947dnm622o8o099yx9r27rcq1de9katvws1m9oy5s8wsy4lmyxeo7wjex8giptaoasd1akptf3pg13ydq4831rv2ydb3kzqqszg5dlxdn1fohip82d8xd3e5n543y20tfrywyl2x7vf9ibo35b4nfja07r0cx3qknsc7a7bt4575c6i1cakjwrhec6pv7v1t2bgd4zwx3vk10825bkdpzildkfmduh46uoatnnufmu36hzvw9dm8to8nvy7z3r4cadoc3cu6zx1ichlp8yshdw0n52kejbgt3cmo0witolcq0luokzta5q9x2iw
ny72tln81mma3k434c6oibe3j0ui0fu0a7xbp5913r5d2pvqdsmeg07ox50s2zi6i4xoqigptn7x43it9qfx2rbm5bq9trsjd4uyn95pmkees7l8xccgfioq6xamir3ownd3ia8fuy0vbryk6v2gqidzhvy6xcs78ledv4yacpert48c6l0h6bv4s4c7z7wj5k3drpsjq0pdf7mlnuj1tahqcozjnh91fxqog9ddjzu4kdw3oz7jpgum6tcw61k3pjmkxtzulz5wqwny8wkfrce3anfl42ygpranspb8ug1i11kf3t9n0l3z8exz97t04ukbpu82lxpqvkewoj565q674cj3htdxtbxpve4v5lpnlcr3h3019cfpta367qolqsnie5u4olwqenkor8fbxixiyjcenyq4af9vq3xtagzfubr9pbburfwe6iwpamvkx76w06xwbx6ek7rgqxjgynzb9e43af86qachkspxoeo0oqis19pxigjatroul0zfh4m5qqlpr772rmddhlj08xb0z4imv6nam3 00:27:03.906 07:27:37 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:27:03.906 07:27:37 -- dd/basic_rw.sh@59 -- # gen_conf 00:27:03.906 07:27:37 -- dd/common.sh@31 -- # xtrace_disable 00:27:03.906 07:27:37 -- common/autotest_common.sh@10 -- # set +x 00:27:03.906 { 00:27:03.906 "subsystems": [ 00:27:03.906 { 00:27:03.906 "subsystem": "bdev", 00:27:03.906 "config": [ 00:27:03.906 { 00:27:03.906 "params": { 00:27:03.906 "trtype": "pcie", 00:27:03.906 "traddr": "0000:00:06.0", 00:27:03.906 "name": "Nvme0" 00:27:03.906 }, 00:27:03.906 "method": "bdev_nvme_attach_controller" 00:27:03.906 }, 00:27:03.906 { 00:27:03.906 "method": "bdev_wait_for_examine" 00:27:03.906 } 00:27:03.906 ] 00:27:03.906 } 00:27:03.906 ] 00:27:03.906 } 00:27:03.906 [2024-02-13 07:27:37.515287] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:03.906 [2024-02-13 07:27:37.515624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139734 ] 00:27:04.165 [2024-02-13 07:27:37.681239] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.424 [2024-02-13 07:27:37.865577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.424 [2024-02-13 07:27:37.865731] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:04.683  Copying: 4096/4096 [B] (average 4000 kBps)[2024-02-13 07:27:38.241567] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:05.617 00:27:05.617 00:27:05.617 07:27:39 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:27:05.617 07:27:39 -- dd/basic_rw.sh@65 -- # gen_conf 00:27:05.617 07:27:39 -- dd/common.sh@31 -- # xtrace_disable 00:27:05.617 07:27:39 -- common/autotest_common.sh@10 -- # set +x 00:27:05.617 { 00:27:05.617 "subsystems": [ 00:27:05.617 { 00:27:05.617 "subsystem": "bdev", 00:27:05.617 "config": [ 00:27:05.617 { 00:27:05.617 "params": { 00:27:05.617 "trtype": "pcie", 00:27:05.617 "traddr": "0000:00:06.0", 00:27:05.617 "name": "Nvme0" 00:27:05.617 }, 00:27:05.617 "method": "bdev_nvme_attach_controller" 00:27:05.617 }, 00:27:05.617 { 00:27:05.617 "method": "bdev_wait_for_examine" 00:27:05.617 } 00:27:05.617 ] 00:27:05.617 } 00:27:05.617 ] 00:27:05.617 } 00:27:05.617 [2024-02-13 07:27:39.273778] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:05.617 [2024-02-13 07:27:39.273966] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139758 ] 00:27:05.874 [2024-02-13 07:27:39.438306] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.132 [2024-02-13 07:27:39.621705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.132 [2024-02-13 07:27:39.621852] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:06.391  Copying: 4096/4096 [B] (average 4000 kBps)[2024-02-13 07:27:39.981173] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:07.325 00:27:07.325 00:27:07.584 07:27:41 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:27:07.584 ************************************ 00:27:07.584 END TEST dd_rw_offset 00:27:07.584 ************************************ 00:27:07.585 07:27:41 -- dd/basic_rw.sh@72 -- # [[ sg4cqgyekw4mjkvehyuipepfd6ybnhct4lrrtgdapc1zinp5rmsfgrq0m6bqxcilujy9hdbbliz1lr3daqtpbgj19cr9cjelhqeiedm713984b2eoxcaqh96edyibhv9we9zp8gf24r7k6cnxtexc15m235by09i6wunfv8z7cnxau0x44ee5xdgwowiw9nmh2y6cwwry5pfjx9bs2ksk6gj20teux2bj3d1pmoyzadqse4cmtcocqpg714uiyanchh719297z4scop9bfbo49sperewanmz22ff5pt73ypvencv856mv0qh2wiycwycuvo2evbpt4xmx41rj8qc76gba2p9kx4y8cpp7nbc60lomuoru1yyq4uy4fzd593o6us1d95j1ro1mh4sub5f02r03fuxj5mdh1ku2xdrq9milz7w08kr2wk4n13o6xva46e93ev8h2v8kltzbjw6gqv868tkj8kfa22m8mxdrtdbmdldr5kohiw96pwtt5lckdkv8emliudg8voxia3qc7qjbv1tzmzu1e8is9hiyszdl573wrttxxua7clbm92btah3zqgp8txbfv7orzmwp5yvgnifeamqmrez4ncxworfw6b0j0n6lp27xbfuom3ivnnp7hp1v5cr43rzzz3q4c78dp0e6oly5kgpwj356xs69kwdl3bq5rfwwcse80dckl922w9i7mrjnymmq2sz6dg8min8ju0d61wgh24b60lousiqsjds65zql0p2stiguza7bjswmq1qe7wmlwpbcma56kzbguf4u77qq7m5zqh6ermn66xzqzv37e3yi400gi8ixyu0t7d6tf1jwmtldep6gy6ywm0q95d9au7i5oqbph4z26farmgwevbg7asdcl9msokfbh4viiqwi5xfcujh9hft0w9i4k8d3shl3ymq7v9w3sayoa1p29m7hzh9c0t3cpti41ke68uq9pe6ag457rm5lxxvmvw5bloc0lov7yvsr5yvfmkmrtjlawsnc60dg4rgezgv9rtdctifjv2sq1quoeroelcdzye53ftixokpzp3jvis9axkgphahgfjug4pmpw348ru5t8rraj3zz51oz6vbljqjibbn016fn6ivljs2hfyxci1i1ejb3zkzzvybgjrd4om06raltt59l4a72uyvbhxyhbq7b7oq27cgbmo8v8zp90umkb8lwr6ofnomz7dt8byuqyqrzfm8x9lu8jkkti0yq8r8u1llajmrhbtg5ikuq9w51iqoz4b8plbvgqz7hngw3jjypw5e1dy68gkuo5sgaqv9cktg3babojbyj5ah5aybbruiq14b4kvmusqid7jbduwi01856ic4c8swt1c85ex8ypznvjynr4ec7s9lxket73hoedb6yno6mwpwz8uyimmb0j2tcjoet1g1mqgtjackmn02pc74mcpgyc4h07z8h8g9t0h0xhnyevgkn3prbgz88mo8jwavq8s0oepr85oaclrpxqjkn371no431dkvlzdm0hrwqex470jyd89odzknt5yj172ht9ccflmh975j0v6084ji05hotur23a4drm1fwywq4kyg15y57pqphar9cmjatffc4lr6arop1r2pw4ct41unxd16kk5slyaapphd92oii29vcnkehc21rujdo646z33qjnsagprdzutiozompiwghxjtnl4wulop3ocmp58udn8ozl6entkz5g0f9xdoapkl27gsdfaqzcj5i0bmf0yj3qaj199h4o8kcuwr9jpbrfadqrnzclh2tiyp9jtpkblpvcz1xwbpzvwyol1vozvcvnshbjon1qfxq8thhwud2v6eq5sm99v0x747rfik2dozx8asgh2flytia0eh2zxvlfbn0nqj7wsta2kv35gvs4obijqr4lg12yoqfvy6oe1a059jff3vjoprtcoc8c7wka7iav5vsjbgpqy0l1o18e7ipqwl7yl6xjzkewppizm02c4rkonnd866oz41mhihh0djrnkf9q5z0kcvbrknicqim6qk8u5guq2mm1yechvy3ceowf6wu3qfudt2w96ngsuyjj4fzhe53zuvwi2cv012lo6f2ju3okqyz875mry8blu4imeeai425sks2wrwpn0seyhqkf4wctujt8klea9oynskopfpcntqabttb2xkce2jnknq0foxny4be5sdehffphvrfm2pj6uj4lqyslk0bi9ixbp
6m1tupj2h0ddf6m6icq39ez4py5a1g65gizz68di3irmjyih4i5izczgk0bo5tak6xxu8e5jtbcprjb8yuextf3cnqv6lpyushqkuim1g72ns5qu67211nbsk4wwzgc0t0odq035vcyge2z9rsvyjcf4h00v2otoawcb6s6syt6g283vtdkjb5efhtsrnddoedeb0pv9uv8ypy83lkniraq8xz39bygx28o059fi6va0z02vjrnpvstcrksu98j0npvao8s8lfmwqh0be8k8cdk06w97oxzn58wgej1qv8oigf33w8b53r2jp6jqjtqymcbpc6x2vwxoin88m7fygq0bzne2uysytzyi44ngkzvclu6r1h8jujenq1dm1w7yumrpl3q09detr6b3ilxcny2jkrelix557uv29m0a6hu5a59f12mqtn7euoe32yycz98zchwdln26l7pncxrp2yodxy6npflz1dyo0fdm3ppo3opijlhsw8a8cwad96opn3dygezpvfm5xv4qo65libowxpllz4tnpqct01o5u37ljad6kccthjd9l3ka3nubhqrke4bcbpvrjusie3qdai3ojg0wk62ezcaqpk10v6007002b3rtudiqdy3p28pkqm6wv43jondukc9g1a2elicm4h0g7ifnvv84txuauf56sgd09qioybrt3wib1l1bvftfv6mfwmqkxtvq86hze8kpq8hwyi3lzq4lkdxc6cr27qlr4h73n34oydkkas3172lhv4xu3oe0d91jz3wddusdxnoe6h840q7gk51nur088hiw8qiujwkaw1ctxdd4csfh43v5t0ufub99v5wply9r4yscyrykv0jihoh8aer5oqppseqvn72warr5ofw7kiisy08mosic2igl35ulsa58p2qr11bmv1muxu96x4nyb7v4cvsbxl9bcnzsvq4ovykxkgjkhedxftdu6t60947dnm622o8o099yx9r27rcq1de9katvws1m9oy5s8wsy4lmyxeo7wjex8giptaoasd1akptf3pg13ydq4831rv2ydb3kzqqszg5dlxdn1fohip82d8xd3e5n543y20tfrywyl2x7vf9ibo35b4nfja07r0cx3qknsc7a7bt4575c6i1cakjwrhec6pv7v1t2bgd4zwx3vk10825bkdpzildkfmduh46uoatnnufmu36hzvw9dm8to8nvy7z3r4cadoc3cu6zx1ichlp8yshdw0n52kejbgt3cmo0witolcq0luokzta5q9x2iwny72tln81mma3k434c6oibe3j0ui0fu0a7xbp5913r5d2pvqdsmeg07ox50s2zi6i4xoqigptn7x43it9qfx2rbm5bq9trsjd4uyn95pmkees7l8xccgfioq6xamir3ownd3ia8fuy0vbryk6v2gqidzhvy6xcs78ledv4yacpert48c6l0h6bv4s4c7z7wj5k3drpsjq0pdf7mlnuj1tahqcozjnh91fxqog9ddjzu4kdw3oz7jpgum6tcw61k3pjmkxtzulz5wqwny8wkfrce3anfl42ygpranspb8ug1i11kf3t9n0l3z8exz97t04ukbpu82lxpqvkewoj565q674cj3htdxtbxpve4v5lpnlcr3h3019cfpta367qolqsnie5u4olwqenkor8fbxixiyjcenyq4af9vq3xtagzfubr9pbburfwe6iwpamvkx76w06xwbx6ek7rgqxjgynzb9e43af86qachkspxoeo0oqis19pxigjatroul0zfh4m5qqlpr772rmddhlj08xb0z4imv6nam3 == 
\s\g\4\c\q\g\y\e\k\w\4\m\j\k\v\e\h\y\u\i\p\e\p\f\d\6\y\b\n\h\c\t\4\l\r\r\t\g\d\a\p\c\1\z\i\n\p\5\r\m\s\f\g\r\q\0\m\6\b\q\x\c\i\l\u\j\y\9\h\d\b\b\l\i\z\1\l\r\3\d\a\q\t\p\b\g\j\1\9\c\r\9\c\j\e\l\h\q\e\i\e\d\m\7\1\3\9\8\4\b\2\e\o\x\c\a\q\h\9\6\e\d\y\i\b\h\v\9\w\e\9\z\p\8\g\f\2\4\r\7\k\6\c\n\x\t\e\x\c\1\5\m\2\3\5\b\y\0\9\i\6\w\u\n\f\v\8\z\7\c\n\x\a\u\0\x\4\4\e\e\5\x\d\g\w\o\w\i\w\9\n\m\h\2\y\6\c\w\w\r\y\5\p\f\j\x\9\b\s\2\k\s\k\6\g\j\2\0\t\e\u\x\2\b\j\3\d\1\p\m\o\y\z\a\d\q\s\e\4\c\m\t\c\o\c\q\p\g\7\1\4\u\i\y\a\n\c\h\h\7\1\9\2\9\7\z\4\s\c\o\p\9\b\f\b\o\4\9\s\p\e\r\e\w\a\n\m\z\2\2\f\f\5\p\t\7\3\y\p\v\e\n\c\v\8\5\6\m\v\0\q\h\2\w\i\y\c\w\y\c\u\v\o\2\e\v\b\p\t\4\x\m\x\4\1\r\j\8\q\c\7\6\g\b\a\2\p\9\k\x\4\y\8\c\p\p\7\n\b\c\6\0\l\o\m\u\o\r\u\1\y\y\q\4\u\y\4\f\z\d\5\9\3\o\6\u\s\1\d\9\5\j\1\r\o\1\m\h\4\s\u\b\5\f\0\2\r\0\3\f\u\x\j\5\m\d\h\1\k\u\2\x\d\r\q\9\m\i\l\z\7\w\0\8\k\r\2\w\k\4\n\1\3\o\6\x\v\a\4\6\e\9\3\e\v\8\h\2\v\8\k\l\t\z\b\j\w\6\g\q\v\8\6\8\t\k\j\8\k\f\a\2\2\m\8\m\x\d\r\t\d\b\m\d\l\d\r\5\k\o\h\i\w\9\6\p\w\t\t\5\l\c\k\d\k\v\8\e\m\l\i\u\d\g\8\v\o\x\i\a\3\q\c\7\q\j\b\v\1\t\z\m\z\u\1\e\8\i\s\9\h\i\y\s\z\d\l\5\7\3\w\r\t\t\x\x\u\a\7\c\l\b\m\9\2\b\t\a\h\3\z\q\g\p\8\t\x\b\f\v\7\o\r\z\m\w\p\5\y\v\g\n\i\f\e\a\m\q\m\r\e\z\4\n\c\x\w\o\r\f\w\6\b\0\j\0\n\6\l\p\2\7\x\b\f\u\o\m\3\i\v\n\n\p\7\h\p\1\v\5\c\r\4\3\r\z\z\z\3\q\4\c\7\8\d\p\0\e\6\o\l\y\5\k\g\p\w\j\3\5\6\x\s\6\9\k\w\d\l\3\b\q\5\r\f\w\w\c\s\e\8\0\d\c\k\l\9\2\2\w\9\i\7\m\r\j\n\y\m\m\q\2\s\z\6\d\g\8\m\i\n\8\j\u\0\d\6\1\w\g\h\2\4\b\6\0\l\o\u\s\i\q\s\j\d\s\6\5\z\q\l\0\p\2\s\t\i\g\u\z\a\7\b\j\s\w\m\q\1\q\e\7\w\m\l\w\p\b\c\m\a\5\6\k\z\b\g\u\f\4\u\7\7\q\q\7\m\5\z\q\h\6\e\r\m\n\6\6\x\z\q\z\v\3\7\e\3\y\i\4\0\0\g\i\8\i\x\y\u\0\t\7\d\6\t\f\1\j\w\m\t\l\d\e\p\6\g\y\6\y\w\m\0\q\9\5\d\9\a\u\7\i\5\o\q\b\p\h\4\z\2\6\f\a\r\m\g\w\e\v\b\g\7\a\s\d\c\l\9\m\s\o\k\f\b\h\4\v\i\i\q\w\i\5\x\f\c\u\j\h\9\h\f\t\0\w\9\i\4\k\8\d\3\s\h\l\3\y\m\q\7\v\9\w\3\s\a\y\o\a\1\p\2\9\m\7\h\z\h\9\c\0\t\3\c\p\t\i\4\1\k\e\6\8\u\q\9\p\e\6\a\g\4\5\7\r\m\5\l\x\x\v\m\v\w\5\b\l\o\c\0\l\o\v\7\y\v\s\r\5\y\v\f\m\k\m\r\t\j\l\a\w\s\n\c\6\0\d\g\4\r\g\e\z\g\v\9\r\t\d\c\t\i\f\j\v\2\s\q\1\q\u\o\e\r\o\e\l\c\d\z\y\e\5\3\f\t\i\x\o\k\p\z\p\3\j\v\i\s\9\a\x\k\g\p\h\a\h\g\f\j\u\g\4\p\m\p\w\3\4\8\r\u\5\t\8\r\r\a\j\3\z\z\5\1\o\z\6\v\b\l\j\q\j\i\b\b\n\0\1\6\f\n\6\i\v\l\j\s\2\h\f\y\x\c\i\1\i\1\e\j\b\3\z\k\z\z\v\y\b\g\j\r\d\4\o\m\0\6\r\a\l\t\t\5\9\l\4\a\7\2\u\y\v\b\h\x\y\h\b\q\7\b\7\o\q\2\7\c\g\b\m\o\8\v\8\z\p\9\0\u\m\k\b\8\l\w\r\6\o\f\n\o\m\z\7\d\t\8\b\y\u\q\y\q\r\z\f\m\8\x\9\l\u\8\j\k\k\t\i\0\y\q\8\r\8\u\1\l\l\a\j\m\r\h\b\t\g\5\i\k\u\q\9\w\5\1\i\q\o\z\4\b\8\p\l\b\v\g\q\z\7\h\n\g\w\3\j\j\y\p\w\5\e\1\d\y\6\8\g\k\u\o\5\s\g\a\q\v\9\c\k\t\g\3\b\a\b\o\j\b\y\j\5\a\h\5\a\y\b\b\r\u\i\q\1\4\b\4\k\v\m\u\s\q\i\d\7\j\b\d\u\w\i\0\1\8\5\6\i\c\4\c\8\s\w\t\1\c\8\5\e\x\8\y\p\z\n\v\j\y\n\r\4\e\c\7\s\9\l\x\k\e\t\7\3\h\o\e\d\b\6\y\n\o\6\m\w\p\w\z\8\u\y\i\m\m\b\0\j\2\t\c\j\o\e\t\1\g\1\m\q\g\t\j\a\c\k\m\n\0\2\p\c\7\4\m\c\p\g\y\c\4\h\0\7\z\8\h\8\g\9\t\0\h\0\x\h\n\y\e\v\g\k\n\3\p\r\b\g\z\8\8\m\o\8\j\w\a\v\q\8\s\0\o\e\p\r\8\5\o\a\c\l\r\p\x\q\j\k\n\3\7\1\n\o\4\3\1\d\k\v\l\z\d\m\0\h\r\w\q\e\x\4\7\0\j\y\d\8\9\o\d\z\k\n\t\5\y\j\1\7\2\h\t\9\c\c\f\l\m\h\9\7\5\j\0\v\6\0\8\4\j\i\0\5\h\o\t\u\r\2\3\a\4\d\r\m\1\f\w\y\w\q\4\k\y\g\1\5\y\5\7\p\q\p\h\a\r\9\c\m\j\a\t\f\f\c\4\l\r\6\a\r\o\p\1\r\2\p\w\4\c\t\4\1\u\n\x\d\1\6\k\k\5\s\l\y\a\a\p\p\h\d\9\2\o\i\i\2\9\v\c\n\k\e\h\c\2\1\r\u\j\d\o\6\4\6\z\3\3\q\j\n\s\a\g\p\r\d\z\u\t\i\o\z\o\m\p\i\w\g\h\x\j\t\n\l\4\w\u\l\o\p\3\o\c\m\p\5\8\u\d\n\8\o\z\l\6\e\n\t\k\z\5\g\0\f\9\x\d\o\a\p\k\l\2\7\g\s\d\f\a\q\z\c\j\5\i\0\b\m\f\0\y\j\3\
q\a\j\1\9\9\h\4\o\8\k\c\u\w\r\9\j\p\b\r\f\a\d\q\r\n\z\c\l\h\2\t\i\y\p\9\j\t\p\k\b\l\p\v\c\z\1\x\w\b\p\z\v\w\y\o\l\1\v\o\z\v\c\v\n\s\h\b\j\o\n\1\q\f\x\q\8\t\h\h\w\u\d\2\v\6\e\q\5\s\m\9\9\v\0\x\7\4\7\r\f\i\k\2\d\o\z\x\8\a\s\g\h\2\f\l\y\t\i\a\0\e\h\2\z\x\v\l\f\b\n\0\n\q\j\7\w\s\t\a\2\k\v\3\5\g\v\s\4\o\b\i\j\q\r\4\l\g\1\2\y\o\q\f\v\y\6\o\e\1\a\0\5\9\j\f\f\3\v\j\o\p\r\t\c\o\c\8\c\7\w\k\a\7\i\a\v\5\v\s\j\b\g\p\q\y\0\l\1\o\1\8\e\7\i\p\q\w\l\7\y\l\6\x\j\z\k\e\w\p\p\i\z\m\0\2\c\4\r\k\o\n\n\d\8\6\6\o\z\4\1\m\h\i\h\h\0\d\j\r\n\k\f\9\q\5\z\0\k\c\v\b\r\k\n\i\c\q\i\m\6\q\k\8\u\5\g\u\q\2\m\m\1\y\e\c\h\v\y\3\c\e\o\w\f\6\w\u\3\q\f\u\d\t\2\w\9\6\n\g\s\u\y\j\j\4\f\z\h\e\5\3\z\u\v\w\i\2\c\v\0\1\2\l\o\6\f\2\j\u\3\o\k\q\y\z\8\7\5\m\r\y\8\b\l\u\4\i\m\e\e\a\i\4\2\5\s\k\s\2\w\r\w\p\n\0\s\e\y\h\q\k\f\4\w\c\t\u\j\t\8\k\l\e\a\9\o\y\n\s\k\o\p\f\p\c\n\t\q\a\b\t\t\b\2\x\k\c\e\2\j\n\k\n\q\0\f\o\x\n\y\4\b\e\5\s\d\e\h\f\f\p\h\v\r\f\m\2\p\j\6\u\j\4\l\q\y\s\l\k\0\b\i\9\i\x\b\p\6\m\1\t\u\p\j\2\h\0\d\d\f\6\m\6\i\c\q\3\9\e\z\4\p\y\5\a\1\g\6\5\g\i\z\z\6\8\d\i\3\i\r\m\j\y\i\h\4\i\5\i\z\c\z\g\k\0\b\o\5\t\a\k\6\x\x\u\8\e\5\j\t\b\c\p\r\j\b\8\y\u\e\x\t\f\3\c\n\q\v\6\l\p\y\u\s\h\q\k\u\i\m\1\g\7\2\n\s\5\q\u\6\7\2\1\1\n\b\s\k\4\w\w\z\g\c\0\t\0\o\d\q\0\3\5\v\c\y\g\e\2\z\9\r\s\v\y\j\c\f\4\h\0\0\v\2\o\t\o\a\w\c\b\6\s\6\s\y\t\6\g\2\8\3\v\t\d\k\j\b\5\e\f\h\t\s\r\n\d\d\o\e\d\e\b\0\p\v\9\u\v\8\y\p\y\8\3\l\k\n\i\r\a\q\8\x\z\3\9\b\y\g\x\2\8\o\0\5\9\f\i\6\v\a\0\z\0\2\v\j\r\n\p\v\s\t\c\r\k\s\u\9\8\j\0\n\p\v\a\o\8\s\8\l\f\m\w\q\h\0\b\e\8\k\8\c\d\k\0\6\w\9\7\o\x\z\n\5\8\w\g\e\j\1\q\v\8\o\i\g\f\3\3\w\8\b\5\3\r\2\j\p\6\j\q\j\t\q\y\m\c\b\p\c\6\x\2\v\w\x\o\i\n\8\8\m\7\f\y\g\q\0\b\z\n\e\2\u\y\s\y\t\z\y\i\4\4\n\g\k\z\v\c\l\u\6\r\1\h\8\j\u\j\e\n\q\1\d\m\1\w\7\y\u\m\r\p\l\3\q\0\9\d\e\t\r\6\b\3\i\l\x\c\n\y\2\j\k\r\e\l\i\x\5\5\7\u\v\2\9\m\0\a\6\h\u\5\a\5\9\f\1\2\m\q\t\n\7\e\u\o\e\3\2\y\y\c\z\9\8\z\c\h\w\d\l\n\2\6\l\7\p\n\c\x\r\p\2\y\o\d\x\y\6\n\p\f\l\z\1\d\y\o\0\f\d\m\3\p\p\o\3\o\p\i\j\l\h\s\w\8\a\8\c\w\a\d\9\6\o\p\n\3\d\y\g\e\z\p\v\f\m\5\x\v\4\q\o\6\5\l\i\b\o\w\x\p\l\l\z\4\t\n\p\q\c\t\0\1\o\5\u\3\7\l\j\a\d\6\k\c\c\t\h\j\d\9\l\3\k\a\3\n\u\b\h\q\r\k\e\4\b\c\b\p\v\r\j\u\s\i\e\3\q\d\a\i\3\o\j\g\0\w\k\6\2\e\z\c\a\q\p\k\1\0\v\6\0\0\7\0\0\2\b\3\r\t\u\d\i\q\d\y\3\p\2\8\p\k\q\m\6\w\v\4\3\j\o\n\d\u\k\c\9\g\1\a\2\e\l\i\c\m\4\h\0\g\7\i\f\n\v\v\8\4\t\x\u\a\u\f\5\6\s\g\d\0\9\q\i\o\y\b\r\t\3\w\i\b\1\l\1\b\v\f\t\f\v\6\m\f\w\m\q\k\x\t\v\q\8\6\h\z\e\8\k\p\q\8\h\w\y\i\3\l\z\q\4\l\k\d\x\c\6\c\r\2\7\q\l\r\4\h\7\3\n\3\4\o\y\d\k\k\a\s\3\1\7\2\l\h\v\4\x\u\3\o\e\0\d\9\1\j\z\3\w\d\d\u\s\d\x\n\o\e\6\h\8\4\0\q\7\g\k\5\1\n\u\r\0\8\8\h\i\w\8\q\i\u\j\w\k\a\w\1\c\t\x\d\d\4\c\s\f\h\4\3\v\5\t\0\u\f\u\b\9\9\v\5\w\p\l\y\9\r\4\y\s\c\y\r\y\k\v\0\j\i\h\o\h\8\a\e\r\5\o\q\p\p\s\e\q\v\n\7\2\w\a\r\r\5\o\f\w\7\k\i\i\s\y\0\8\m\o\s\i\c\2\i\g\l\3\5\u\l\s\a\5\8\p\2\q\r\1\1\b\m\v\1\m\u\x\u\9\6\x\4\n\y\b\7\v\4\c\v\s\b\x\l\9\b\c\n\z\s\v\q\4\o\v\y\k\x\k\g\j\k\h\e\d\x\f\t\d\u\6\t\6\0\9\4\7\d\n\m\6\2\2\o\8\o\0\9\9\y\x\9\r\2\7\r\c\q\1\d\e\9\k\a\t\v\w\s\1\m\9\o\y\5\s\8\w\s\y\4\l\m\y\x\e\o\7\w\j\e\x\8\g\i\p\t\a\o\a\s\d\1\a\k\p\t\f\3\p\g\1\3\y\d\q\4\8\3\1\r\v\2\y\d\b\3\k\z\q\q\s\z\g\5\d\l\x\d\n\1\f\o\h\i\p\8\2\d\8\x\d\3\e\5\n\5\4\3\y\2\0\t\f\r\y\w\y\l\2\x\7\v\f\9\i\b\o\3\5\b\4\n\f\j\a\0\7\r\0\c\x\3\q\k\n\s\c\7\a\7\b\t\4\5\7\5\c\6\i\1\c\a\k\j\w\r\h\e\c\6\p\v\7\v\1\t\2\b\g\d\4\z\w\x\3\v\k\1\0\8\2\5\b\k\d\p\z\i\l\d\k\f\m\d\u\h\4\6\u\o\a\t\n\n\u\f\m\u\3\6\h\z\v\w\9\d\m\8\t\o\8\n\v\y\7\z\3\r\4\c\a\d\o\c\3\c\u\6\z\x\1\i\c\h\l\p\8\y\s\h\d\w\0\n\5\2\k\e\j\b\g\t\3\c\m\o\0\w\i\t\o\l\c\q\0\l\u\o\k\z\t\a\5\q\9\x\2\i\w\n\y\7\2\t
\l\n\8\1\m\m\a\3\k\4\3\4\c\6\o\i\b\e\3\j\0\u\i\0\f\u\0\a\7\x\b\p\5\9\1\3\r\5\d\2\p\v\q\d\s\m\e\g\0\7\o\x\5\0\s\2\z\i\6\i\4\x\o\q\i\g\p\t\n\7\x\4\3\i\t\9\q\f\x\2\r\b\m\5\b\q\9\t\r\s\j\d\4\u\y\n\9\5\p\m\k\e\e\s\7\l\8\x\c\c\g\f\i\o\q\6\x\a\m\i\r\3\o\w\n\d\3\i\a\8\f\u\y\0\v\b\r\y\k\6\v\2\g\q\i\d\z\h\v\y\6\x\c\s\7\8\l\e\d\v\4\y\a\c\p\e\r\t\4\8\c\6\l\0\h\6\b\v\4\s\4\c\7\z\7\w\j\5\k\3\d\r\p\s\j\q\0\p\d\f\7\m\l\n\u\j\1\t\a\h\q\c\o\z\j\n\h\9\1\f\x\q\o\g\9\d\d\j\z\u\4\k\d\w\3\o\z\7\j\p\g\u\m\6\t\c\w\6\1\k\3\p\j\m\k\x\t\z\u\l\z\5\w\q\w\n\y\8\w\k\f\r\c\e\3\a\n\f\l\4\2\y\g\p\r\a\n\s\p\b\8\u\g\1\i\1\1\k\f\3\t\9\n\0\l\3\z\8\e\x\z\9\7\t\0\4\u\k\b\p\u\8\2\l\x\p\q\v\k\e\w\o\j\5\6\5\q\6\7\4\c\j\3\h\t\d\x\t\b\x\p\v\e\4\v\5\l\p\n\l\c\r\3\h\3\0\1\9\c\f\p\t\a\3\6\7\q\o\l\q\s\n\i\e\5\u\4\o\l\w\q\e\n\k\o\r\8\f\b\x\i\x\i\y\j\c\e\n\y\q\4\a\f\9\v\q\3\x\t\a\g\z\f\u\b\r\9\p\b\b\u\r\f\w\e\6\i\w\p\a\m\v\k\x\7\6\w\0\6\x\w\b\x\6\e\k\7\r\g\q\x\j\g\y\n\z\b\9\e\4\3\a\f\8\6\q\a\c\h\k\s\p\x\o\e\o\0\o\q\i\s\1\9\p\x\i\g\j\a\t\r\o\u\l\0\z\f\h\4\m\5\q\q\l\p\r\7\7\2\r\m\d\d\h\l\j\0\8\x\b\0\z\4\i\m\v\6\n\a\m\3 ]] 00:27:07.585 00:27:07.585 real 0m3.615s 00:27:07.585 user 0m2.897s 00:27:07.585 sys 0m0.562s 00:27:07.585 07:27:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:07.585 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:27:07.585 07:27:41 -- dd/basic_rw.sh@1 -- # cleanup 00:27:07.585 07:27:41 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:27:07.585 07:27:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:27:07.585 07:27:41 -- dd/common.sh@11 -- # local nvme_ref= 00:27:07.585 07:27:41 -- dd/common.sh@12 -- # local size=0xffff 00:27:07.585 07:27:41 -- dd/common.sh@14 -- # local bs=1048576 00:27:07.585 07:27:41 -- dd/common.sh@15 -- # local count=1 00:27:07.585 07:27:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:27:07.585 07:27:41 -- dd/common.sh@18 -- # gen_conf 00:27:07.585 07:27:41 -- dd/common.sh@31 -- # xtrace_disable 00:27:07.585 07:27:41 -- common/autotest_common.sh@10 -- # set +x 00:27:07.585 { 00:27:07.585 "subsystems": [ 00:27:07.585 { 00:27:07.585 "subsystem": "bdev", 00:27:07.585 "config": [ 00:27:07.585 { 00:27:07.585 "params": { 00:27:07.585 "trtype": "pcie", 00:27:07.585 "traddr": "0000:00:06.0", 00:27:07.585 "name": "Nvme0" 00:27:07.585 }, 00:27:07.585 "method": "bdev_nvme_attach_controller" 00:27:07.585 }, 00:27:07.585 { 00:27:07.585 "method": "bdev_wait_for_examine" 00:27:07.585 } 00:27:07.585 ] 00:27:07.585 } 00:27:07.585 ] 00:27:07.585 } 00:27:07.585 [2024-02-13 07:27:41.130784] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:07.585 [2024-02-13 07:27:41.130967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139805 ] 00:27:07.843 [2024-02-13 07:27:41.297598] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.843 [2024-02-13 07:27:41.475660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.843 [2024-02-13 07:27:41.475809] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:27:08.409  Copying: 1024/1024 [kB] (average 500 MBps)[2024-02-13 07:27:41.830262] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:27:09.398 00:27:09.398 00:27:09.398 07:27:42 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:09.398 00:27:09.398 real 0m42.465s 00:27:09.398 user 0m34.560s 00:27:09.398 sys 0m6.310s 00:27:09.398 07:27:42 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:09.398 ************************************ 00:27:09.398 END TEST spdk_dd_basic_rw 00:27:09.398 ************************************ 00:27:09.398 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.398 07:27:42 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:09.398 07:27:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:09.398 07:27:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:09.398 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.398 ************************************ 00:27:09.398 START TEST spdk_dd_posix 00:27:09.398 ************************************ 00:27:09.398 07:27:42 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:27:09.398 * Looking for test storage... 
00:27:09.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:27:09.398 07:27:42 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:09.398 07:27:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.398 07:27:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.398 07:27:42 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:27:09.398 07:27:42 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:27:09.398 07:27:42 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:27:09.398 07:27:42 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:27:09.398 07:27:42 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:09.398 07:27:42 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:09.398 07:27:42 -- dd/posix.sh@130 -- # tests 00:27:09.398 07:27:42 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO' 00:27:09.398 * First test run, using AIO 00:27:09.398 07:27:42 -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:27:09.398 07:27:42 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:09.398 07:27:42 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:09.398 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.398 ************************************ 00:27:09.398 START TEST dd_flag_append 00:27:09.398 ************************************ 00:27:09.398 07:27:42 -- common/autotest_common.sh@1102 -- # append 00:27:09.398 07:27:42 -- dd/posix.sh@16 -- # local dump0 00:27:09.398 07:27:42 -- dd/posix.sh@17 -- # local dump1 00:27:09.398 07:27:42 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:09.398 07:27:42 -- dd/common.sh@98 -- # xtrace_disable 00:27:09.398 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.398 07:27:42 -- dd/posix.sh@19 -- # dump0=lrxc8so47zzawyi4czu5sgiaavg0aa94 00:27:09.398 07:27:42 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:09.398 07:27:42 -- dd/common.sh@98 -- # xtrace_disable 00:27:09.398 07:27:42 -- common/autotest_common.sh@10 -- # set +x 00:27:09.398 07:27:42 -- dd/posix.sh@20 -- # dump1=w4tc1tgt39v256hx9uzq42b2znfhijht 00:27:09.398 07:27:42 -- dd/posix.sh@22 -- # printf %s lrxc8so47zzawyi4czu5sgiaavg0aa94 00:27:09.398 07:27:42 -- dd/posix.sh@23 -- # printf %s w4tc1tgt39v256hx9uzq42b2znfhijht 00:27:09.398 07:27:42 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:09.398 [2024-02-13 07:27:42.992989] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:09.398 [2024-02-13 07:27:42.993468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139881 ] 00:27:09.657 [2024-02-13 07:27:43.162430] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.916 [2024-02-13 07:27:43.378106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.112  Copying: 32/32 [B] (average 31 kBps) 00:27:11.112 00:27:11.112 07:27:44 -- dd/posix.sh@27 -- # [[ w4tc1tgt39v256hx9uzq42b2znfhijhtlrxc8so47zzawyi4czu5sgiaavg0aa94 == \w\4\t\c\1\t\g\t\3\9\v\2\5\6\h\x\9\u\z\q\4\2\b\2\z\n\f\h\i\j\h\t\l\r\x\c\8\s\o\4\7\z\z\a\w\y\i\4\c\z\u\5\s\g\i\a\a\v\g\0\a\a\9\4 ]] 00:27:11.112 00:27:11.112 real 0m1.799s 00:27:11.112 user 0m1.361s 00:27:11.112 sys 0m0.273s 00:27:11.112 07:27:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:11.112 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:27:11.112 ************************************ 00:27:11.112 END TEST dd_flag_append 00:27:11.112 ************************************ 00:27:11.112 07:27:44 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:27:11.112 07:27:44 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:11.112 07:27:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:11.112 07:27:44 -- common/autotest_common.sh@10 -- # set +x 00:27:11.112 ************************************ 00:27:11.112 START TEST dd_flag_directory 00:27:11.112 ************************************ 00:27:11.112 07:27:44 -- common/autotest_common.sh@1102 -- # directory 00:27:11.112 07:27:44 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:11.112 07:27:44 -- common/autotest_common.sh@638 -- # local es=0 00:27:11.112 07:27:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:11.112 07:27:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.112 07:27:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:11.112 07:27:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.112 07:27:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:11.112 07:27:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.112 07:27:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:11.112 07:27:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.112 07:27:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:11.112 07:27:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:11.371 [2024-02-13 07:27:44.851651] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:11.371 [2024-02-13 07:27:44.852052] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139934 ] 00:27:11.371 [2024-02-13 07:27:45.019986] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.629 [2024-02-13 07:27:45.196072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.888 [2024-02-13 07:27:45.483066] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:11.888 [2024-02-13 07:27:45.483172] spdk_dd.c:1068:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:11.888 [2024-02-13 07:27:45.483217] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:12.454 [2024-02-13 07:27:46.127168] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:27:13.022 07:27:46 -- common/autotest_common.sh@641 -- # es=236 00:27:13.022 07:27:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:13.022 07:27:46 -- common/autotest_common.sh@650 -- # es=108 00:27:13.022 07:27:46 -- common/autotest_common.sh@651 -- # case "$es" in 00:27:13.022 07:27:46 -- common/autotest_common.sh@658 -- # es=1 00:27:13.022 07:27:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:13.022 07:27:46 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:13.022 07:27:46 -- common/autotest_common.sh@638 -- # local es=0 00:27:13.022 07:27:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:13.022 07:27:46 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.022 07:27:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:13.022 07:27:46 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.022 07:27:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:13.022 07:27:46 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.022 07:27:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:13.022 07:27:46 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.022 07:27:46 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:13.022 07:27:46 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:13.022 [2024-02-13 07:27:46.574558] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:13.022 [2024-02-13 07:27:46.575029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid139973 ] 00:27:13.281 [2024-02-13 07:27:46.742839] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.281 [2024-02-13 07:27:46.921274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.539 [2024-02-13 07:27:47.210024] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:13.539 [2024-02-13 07:27:47.210109] spdk_dd.c:1117:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:13.539 [2024-02-13 07:27:47.210158] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:14.481 [2024-02-13 07:27:47.845387] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:27:14.739 07:27:48 -- common/autotest_common.sh@641 -- # es=236 00:27:14.739 07:27:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:14.739 07:27:48 -- common/autotest_common.sh@650 -- # es=108 00:27:14.739 ************************************ 00:27:14.739 07:27:48 -- common/autotest_common.sh@651 -- # case "$es" in 00:27:14.739 07:27:48 -- common/autotest_common.sh@658 -- # es=1 00:27:14.739 07:27:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:14.739 00:27:14.739 real 0m3.425s 00:27:14.739 user 0m2.721s 00:27:14.739 sys 0m0.498s 00:27:14.739 07:27:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:14.739 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:27:14.739 END TEST dd_flag_directory 00:27:14.739 ************************************ 00:27:14.739 07:27:48 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:27:14.739 07:27:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:14.739 07:27:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:14.739 07:27:48 -- common/autotest_common.sh@10 -- # set +x 00:27:14.739 ************************************ 00:27:14.739 START TEST dd_flag_nofollow 00:27:14.739 ************************************ 00:27:14.739 07:27:48 -- common/autotest_common.sh@1102 -- # nofollow 00:27:14.739 07:27:48 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:14.739 07:27:48 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:14.739 07:27:48 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:14.739 07:27:48 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:14.739 07:27:48 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:14.739 07:27:48 -- common/autotest_common.sh@638 -- # local es=0 00:27:14.739 07:27:48 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:14.739 07:27:48 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.739 07:27:48 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:14.740 07:27:48 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.740 07:27:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:14.740 07:27:48 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.740 07:27:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:14.740 07:27:48 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:14.740 07:27:48 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:14.740 07:27:48 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:14.740 [2024-02-13 07:27:48.327463] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:14.740 [2024-02-13 07:27:48.327929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140017 ] 00:27:14.998 [2024-02-13 07:27:48.496776] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.998 [2024-02-13 07:27:48.674430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.565 [2024-02-13 07:27:48.956296] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:15.565 [2024-02-13 07:27:48.956391] spdk_dd.c:1068:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:15.565 [2024-02-13 07:27:48.956418] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:16.133 [2024-02-13 07:27:49.598044] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:27:16.392 07:27:49 -- common/autotest_common.sh@641 -- # es=216 00:27:16.392 07:27:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:16.392 07:27:49 -- common/autotest_common.sh@650 -- # es=88 00:27:16.392 07:27:49 -- common/autotest_common.sh@651 -- # case "$es" in 00:27:16.392 07:27:49 -- common/autotest_common.sh@658 -- # es=1 00:27:16.392 07:27:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:16.392 07:27:49 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:16.392 07:27:49 -- common/autotest_common.sh@638 -- # local es=0 00:27:16.392 07:27:49 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:16.392 07:27:49 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.392 07:27:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:16.392 07:27:49 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.392 07:27:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:16.392 07:27:49 -- common/autotest_common.sh@632 -- # type -P 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.392 07:27:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:16.392 07:27:49 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:16.392 07:27:49 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:16.392 07:27:49 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:16.392 [2024-02-13 07:27:50.039968] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:16.392 [2024-02-13 07:27:50.040179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140050 ] 00:27:16.651 [2024-02-13 07:27:50.207822] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.910 [2024-02-13 07:27:50.387633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.169 [2024-02-13 07:27:50.674782] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:17.169 [2024-02-13 07:27:50.674886] spdk_dd.c:1117:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:17.169 [2024-02-13 07:27:50.674918] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:17.737 [2024-02-13 07:27:51.313883] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:27:17.996 07:27:51 -- common/autotest_common.sh@641 -- # es=216 00:27:17.996 07:27:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:17.996 07:27:51 -- common/autotest_common.sh@650 -- # es=88 00:27:17.996 07:27:51 -- common/autotest_common.sh@651 -- # case "$es" in 00:27:17.996 07:27:51 -- common/autotest_common.sh@658 -- # es=1 00:27:17.996 07:27:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:17.997 07:27:51 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:17.997 07:27:51 -- dd/common.sh@98 -- # xtrace_disable 00:27:17.997 07:27:51 -- common/autotest_common.sh@10 -- # set +x 00:27:17.997 07:27:51 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:18.256 [2024-02-13 07:27:51.738478] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:18.256 [2024-02-13 07:27:51.738653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140072 ] 00:27:18.256 [2024-02-13 07:27:51.888676] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.514 [2024-02-13 07:27:52.074632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.710  Copying: 512/512 [B] (average 500 kBps) 00:27:19.710 00:27:19.710 07:27:53 -- dd/posix.sh@49 -- # [[ c8bst6ooqpdlx62b5pahl3sq851rwcqimrlfqnyruknsczdreveo3yxt8re0dtr56xxdr1d7rf282a86mq8ixtvnavaw10mpk9fesye1gknnqii4xomgye17qot3ef01ktx01cdtl9gza8wlcot6sby2p73iod7p2rn8ypbunv9czg1vydbx8chriafxlsbxjty476fgcg7wovpfsu9pw18337afiu46eoh9c85hsp5le3imtcotrap7c2z8rr4mzjeuf24t6sfbu62gvno93gpn0xmjleuu2fwzpuumpb7vzmy35h6jdtdajn9rrzbq6dzpaf1itysfulhpkjvrhmej87ojooq4qsouz3gcnu2s6163irzrn97xjtwrvcc94hhjx8xjo4s70yald9l8pdpjwx28gbklrbeora1fexvumb30ad1lehbrsfe42n6tpbhkk5jg3387zttqry5mmmr1ac51vwilv86jdz2kkx4mloe1c8kiyawaunlya56v == \c\8\b\s\t\6\o\o\q\p\d\l\x\6\2\b\5\p\a\h\l\3\s\q\8\5\1\r\w\c\q\i\m\r\l\f\q\n\y\r\u\k\n\s\c\z\d\r\e\v\e\o\3\y\x\t\8\r\e\0\d\t\r\5\6\x\x\d\r\1\d\7\r\f\2\8\2\a\8\6\m\q\8\i\x\t\v\n\a\v\a\w\1\0\m\p\k\9\f\e\s\y\e\1\g\k\n\n\q\i\i\4\x\o\m\g\y\e\1\7\q\o\t\3\e\f\0\1\k\t\x\0\1\c\d\t\l\9\g\z\a\8\w\l\c\o\t\6\s\b\y\2\p\7\3\i\o\d\7\p\2\r\n\8\y\p\b\u\n\v\9\c\z\g\1\v\y\d\b\x\8\c\h\r\i\a\f\x\l\s\b\x\j\t\y\4\7\6\f\g\c\g\7\w\o\v\p\f\s\u\9\p\w\1\8\3\3\7\a\f\i\u\4\6\e\o\h\9\c\8\5\h\s\p\5\l\e\3\i\m\t\c\o\t\r\a\p\7\c\2\z\8\r\r\4\m\z\j\e\u\f\2\4\t\6\s\f\b\u\6\2\g\v\n\o\9\3\g\p\n\0\x\m\j\l\e\u\u\2\f\w\z\p\u\u\m\p\b\7\v\z\m\y\3\5\h\6\j\d\t\d\a\j\n\9\r\r\z\b\q\6\d\z\p\a\f\1\i\t\y\s\f\u\l\h\p\k\j\v\r\h\m\e\j\8\7\o\j\o\o\q\4\q\s\o\u\z\3\g\c\n\u\2\s\6\1\6\3\i\r\z\r\n\9\7\x\j\t\w\r\v\c\c\9\4\h\h\j\x\8\x\j\o\4\s\7\0\y\a\l\d\9\l\8\p\d\p\j\w\x\2\8\g\b\k\l\r\b\e\o\r\a\1\f\e\x\v\u\m\b\3\0\a\d\1\l\e\h\b\r\s\f\e\4\2\n\6\t\p\b\h\k\k\5\j\g\3\3\8\7\z\t\t\q\r\y\5\m\m\m\r\1\a\c\5\1\v\w\i\l\v\8\6\j\d\z\2\k\k\x\4\m\l\o\e\1\c\8\k\i\y\a\w\a\u\n\l\y\a\5\6\v ]] 00:27:19.710 00:27:19.710 real 0m5.131s 00:27:19.710 user 0m3.983s 00:27:19.710 sys 0m0.817s 00:27:19.710 07:27:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:19.710 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:27:19.710 ************************************ 00:27:19.710 END TEST dd_flag_nofollow 00:27:19.710 ************************************ 00:27:19.969 07:27:53 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:27:19.969 07:27:53 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:19.969 07:27:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:19.969 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:27:19.969 ************************************ 00:27:19.969 START TEST dd_flag_noatime 00:27:19.969 ************************************ 00:27:19.969 07:27:53 -- common/autotest_common.sh@1102 -- # noatime 00:27:19.969 07:27:53 -- dd/posix.sh@53 -- # local atime_if 00:27:19.969 07:27:53 -- dd/posix.sh@54 -- # local atime_of 00:27:19.969 07:27:53 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:19.969 07:27:53 -- dd/common.sh@98 -- # xtrace_disable 00:27:19.969 07:27:53 -- common/autotest_common.sh@10 -- # set +x 00:27:19.969 07:27:53 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:19.969 07:27:53 -- dd/posix.sh@60 -- # atime_if=1707809272 00:27:19.969 07:27:53 -- 
dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:19.969 07:27:53 -- dd/posix.sh@61 -- # atime_of=1707809273 00:27:19.969 07:27:53 -- dd/posix.sh@66 -- # sleep 1 00:27:20.910 07:27:54 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:20.910 [2024-02-13 07:27:54.513515] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:20.910 [2024-02-13 07:27:54.513678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140129 ] 00:27:21.169 [2024-02-13 07:27:54.664437] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.428 [2024-02-13 07:27:54.867075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.624  Copying: 512/512 [B] (average 500 kBps) 00:27:22.624 00:27:22.624 07:27:56 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:22.624 07:27:56 -- dd/posix.sh@69 -- # (( atime_if == 1707809272 )) 00:27:22.624 07:27:56 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:22.624 07:27:56 -- dd/posix.sh@70 -- # (( atime_of == 1707809273 )) 00:27:22.624 07:27:56 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:22.624 [2024-02-13 07:27:56.261226] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:22.624 [2024-02-13 07:27:56.261381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140160 ] 00:27:22.883 [2024-02-13 07:27:56.409957] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.142 [2024-02-13 07:27:56.596257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.336  Copying: 512/512 [B] (average 500 kBps) 00:27:24.336 00:27:24.336 07:27:57 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:24.336 07:27:57 -- dd/posix.sh@73 -- # (( atime_if < 1707809276 )) 00:27:24.336 00:27:24.336 real 0m4.538s 00:27:24.336 user 0m2.727s 00:27:24.336 sys 0m0.508s 00:27:24.336 07:27:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:24.336 ************************************ 00:27:24.336 END TEST dd_flag_noatime 00:27:24.336 ************************************ 00:27:24.336 07:27:57 -- common/autotest_common.sh@10 -- # set +x 00:27:24.336 07:27:58 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:27:24.336 07:27:58 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:24.336 07:27:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:24.336 07:27:58 -- common/autotest_common.sh@10 -- # set +x 00:27:24.336 ************************************ 00:27:24.336 START TEST dd_flags_misc 00:27:24.336 ************************************ 00:27:24.336 07:27:58 -- common/autotest_common.sh@1102 -- # io 00:27:24.336 07:27:58 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:24.336 07:27:58 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:27:24.336 
07:27:58 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:24.336 07:27:58 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:24.336 07:27:58 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:24.336 07:27:58 -- dd/common.sh@98 -- # xtrace_disable 00:27:24.336 07:27:58 -- common/autotest_common.sh@10 -- # set +x 00:27:24.594 07:27:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:24.594 07:27:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:24.594 [2024-02-13 07:27:58.082336] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:24.594 [2024-02-13 07:27:58.082496] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140224 ] 00:27:24.594 [2024-02-13 07:27:58.231740] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.853 [2024-02-13 07:27:58.430461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.045  Copying: 512/512 [B] (average 500 kBps) 00:27:26.045 00:27:26.304 07:27:59 -- dd/posix.sh@93 -- # [[ 3xhbmsv0yxmz5ouqf9u7ix8zewblkho8cwvs90ecjy31m0slqrm2q8qyc5auqiygwtq5uwaq2176kxin65ykud8p03zht6nnqo98sx5bgvv8md1ifq22ja6lps3z57dip9x6n2dghxu4dqtx68fl3ag6uidub4vbsnpnji0n2tk81166greo61lbbqehceaoxpb2fv0bgajub3cqy53nmzynpu4urpsfqns53xv5am7gcrap0qqhwkcyyglo4gb34btiykwv35f5nmyrxprlcgv6crsinapxvu2q99kwh0w6x60ry93g2cmnun9f9g3t93le7d656gyp5sxpp6rxp0roh3jo8eco7oik85b30ju43gu3vldgalesigesxp1wbbz8lf39g8kafdjaklvagnfkwfp41p1e6sav88o3uq81xynwtp09juou5a5cf3y6exq5lkajw6q0rw0hqt67u18sf943xe6jc35eqvqp6o6oplurh598euq3ozah4qki == \3\x\h\b\m\s\v\0\y\x\m\z\5\o\u\q\f\9\u\7\i\x\8\z\e\w\b\l\k\h\o\8\c\w\v\s\9\0\e\c\j\y\3\1\m\0\s\l\q\r\m\2\q\8\q\y\c\5\a\u\q\i\y\g\w\t\q\5\u\w\a\q\2\1\7\6\k\x\i\n\6\5\y\k\u\d\8\p\0\3\z\h\t\6\n\n\q\o\9\8\s\x\5\b\g\v\v\8\m\d\1\i\f\q\2\2\j\a\6\l\p\s\3\z\5\7\d\i\p\9\x\6\n\2\d\g\h\x\u\4\d\q\t\x\6\8\f\l\3\a\g\6\u\i\d\u\b\4\v\b\s\n\p\n\j\i\0\n\2\t\k\8\1\1\6\6\g\r\e\o\6\1\l\b\b\q\e\h\c\e\a\o\x\p\b\2\f\v\0\b\g\a\j\u\b\3\c\q\y\5\3\n\m\z\y\n\p\u\4\u\r\p\s\f\q\n\s\5\3\x\v\5\a\m\7\g\c\r\a\p\0\q\q\h\w\k\c\y\y\g\l\o\4\g\b\3\4\b\t\i\y\k\w\v\3\5\f\5\n\m\y\r\x\p\r\l\c\g\v\6\c\r\s\i\n\a\p\x\v\u\2\q\9\9\k\w\h\0\w\6\x\6\0\r\y\9\3\g\2\c\m\n\u\n\9\f\9\g\3\t\9\3\l\e\7\d\6\5\6\g\y\p\5\s\x\p\p\6\r\x\p\0\r\o\h\3\j\o\8\e\c\o\7\o\i\k\8\5\b\3\0\j\u\4\3\g\u\3\v\l\d\g\a\l\e\s\i\g\e\s\x\p\1\w\b\b\z\8\l\f\3\9\g\8\k\a\f\d\j\a\k\l\v\a\g\n\f\k\w\f\p\4\1\p\1\e\6\s\a\v\8\8\o\3\u\q\8\1\x\y\n\w\t\p\0\9\j\u\o\u\5\a\5\c\f\3\y\6\e\x\q\5\l\k\a\j\w\6\q\0\r\w\0\h\q\t\6\7\u\1\8\s\f\9\4\3\x\e\6\j\c\3\5\e\q\v\q\p\6\o\6\o\p\l\u\r\h\5\9\8\e\u\q\3\o\z\a\h\4\q\k\i ]] 00:27:26.304 07:27:59 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:26.304 07:27:59 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:26.304 [2024-02-13 07:27:59.810690] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:26.304 [2024-02-13 07:27:59.811127] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140257 ] 00:27:26.304 [2024-02-13 07:27:59.977332] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.561 [2024-02-13 07:28:00.151979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.199  Copying: 512/512 [B] (average 500 kBps) 00:27:28.199 00:27:28.199 07:28:01 -- dd/posix.sh@93 -- # [[ 3xhbmsv0yxmz5ouqf9u7ix8zewblkho8cwvs90ecjy31m0slqrm2q8qyc5auqiygwtq5uwaq2176kxin65ykud8p03zht6nnqo98sx5bgvv8md1ifq22ja6lps3z57dip9x6n2dghxu4dqtx68fl3ag6uidub4vbsnpnji0n2tk81166greo61lbbqehceaoxpb2fv0bgajub3cqy53nmzynpu4urpsfqns53xv5am7gcrap0qqhwkcyyglo4gb34btiykwv35f5nmyrxprlcgv6crsinapxvu2q99kwh0w6x60ry93g2cmnun9f9g3t93le7d656gyp5sxpp6rxp0roh3jo8eco7oik85b30ju43gu3vldgalesigesxp1wbbz8lf39g8kafdjaklvagnfkwfp41p1e6sav88o3uq81xynwtp09juou5a5cf3y6exq5lkajw6q0rw0hqt67u18sf943xe6jc35eqvqp6o6oplurh598euq3ozah4qki == \3\x\h\b\m\s\v\0\y\x\m\z\5\o\u\q\f\9\u\7\i\x\8\z\e\w\b\l\k\h\o\8\c\w\v\s\9\0\e\c\j\y\3\1\m\0\s\l\q\r\m\2\q\8\q\y\c\5\a\u\q\i\y\g\w\t\q\5\u\w\a\q\2\1\7\6\k\x\i\n\6\5\y\k\u\d\8\p\0\3\z\h\t\6\n\n\q\o\9\8\s\x\5\b\g\v\v\8\m\d\1\i\f\q\2\2\j\a\6\l\p\s\3\z\5\7\d\i\p\9\x\6\n\2\d\g\h\x\u\4\d\q\t\x\6\8\f\l\3\a\g\6\u\i\d\u\b\4\v\b\s\n\p\n\j\i\0\n\2\t\k\8\1\1\6\6\g\r\e\o\6\1\l\b\b\q\e\h\c\e\a\o\x\p\b\2\f\v\0\b\g\a\j\u\b\3\c\q\y\5\3\n\m\z\y\n\p\u\4\u\r\p\s\f\q\n\s\5\3\x\v\5\a\m\7\g\c\r\a\p\0\q\q\h\w\k\c\y\y\g\l\o\4\g\b\3\4\b\t\i\y\k\w\v\3\5\f\5\n\m\y\r\x\p\r\l\c\g\v\6\c\r\s\i\n\a\p\x\v\u\2\q\9\9\k\w\h\0\w\6\x\6\0\r\y\9\3\g\2\c\m\n\u\n\9\f\9\g\3\t\9\3\l\e\7\d\6\5\6\g\y\p\5\s\x\p\p\6\r\x\p\0\r\o\h\3\j\o\8\e\c\o\7\o\i\k\8\5\b\3\0\j\u\4\3\g\u\3\v\l\d\g\a\l\e\s\i\g\e\s\x\p\1\w\b\b\z\8\l\f\3\9\g\8\k\a\f\d\j\a\k\l\v\a\g\n\f\k\w\f\p\4\1\p\1\e\6\s\a\v\8\8\o\3\u\q\8\1\x\y\n\w\t\p\0\9\j\u\o\u\5\a\5\c\f\3\y\6\e\x\q\5\l\k\a\j\w\6\q\0\r\w\0\h\q\t\6\7\u\1\8\s\f\9\4\3\x\e\6\j\c\3\5\e\q\v\q\p\6\o\6\o\p\l\u\r\h\5\9\8\e\u\q\3\o\z\a\h\4\q\k\i ]] 00:27:28.199 07:28:01 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:28.199 07:28:01 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:28.199 [2024-02-13 07:28:01.550684] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:28.199 [2024-02-13 07:28:01.550913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140274 ] 00:27:28.199 [2024-02-13 07:28:01.720556] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.458 [2024-02-13 07:28:01.904266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.660  Copying: 512/512 [B] (average 125 kBps) 00:27:29.660 00:27:29.660 07:28:03 -- dd/posix.sh@93 -- # [[ 3xhbmsv0yxmz5ouqf9u7ix8zewblkho8cwvs90ecjy31m0slqrm2q8qyc5auqiygwtq5uwaq2176kxin65ykud8p03zht6nnqo98sx5bgvv8md1ifq22ja6lps3z57dip9x6n2dghxu4dqtx68fl3ag6uidub4vbsnpnji0n2tk81166greo61lbbqehceaoxpb2fv0bgajub3cqy53nmzynpu4urpsfqns53xv5am7gcrap0qqhwkcyyglo4gb34btiykwv35f5nmyrxprlcgv6crsinapxvu2q99kwh0w6x60ry93g2cmnun9f9g3t93le7d656gyp5sxpp6rxp0roh3jo8eco7oik85b30ju43gu3vldgalesigesxp1wbbz8lf39g8kafdjaklvagnfkwfp41p1e6sav88o3uq81xynwtp09juou5a5cf3y6exq5lkajw6q0rw0hqt67u18sf943xe6jc35eqvqp6o6oplurh598euq3ozah4qki == \3\x\h\b\m\s\v\0\y\x\m\z\5\o\u\q\f\9\u\7\i\x\8\z\e\w\b\l\k\h\o\8\c\w\v\s\9\0\e\c\j\y\3\1\m\0\s\l\q\r\m\2\q\8\q\y\c\5\a\u\q\i\y\g\w\t\q\5\u\w\a\q\2\1\7\6\k\x\i\n\6\5\y\k\u\d\8\p\0\3\z\h\t\6\n\n\q\o\9\8\s\x\5\b\g\v\v\8\m\d\1\i\f\q\2\2\j\a\6\l\p\s\3\z\5\7\d\i\p\9\x\6\n\2\d\g\h\x\u\4\d\q\t\x\6\8\f\l\3\a\g\6\u\i\d\u\b\4\v\b\s\n\p\n\j\i\0\n\2\t\k\8\1\1\6\6\g\r\e\o\6\1\l\b\b\q\e\h\c\e\a\o\x\p\b\2\f\v\0\b\g\a\j\u\b\3\c\q\y\5\3\n\m\z\y\n\p\u\4\u\r\p\s\f\q\n\s\5\3\x\v\5\a\m\7\g\c\r\a\p\0\q\q\h\w\k\c\y\y\g\l\o\4\g\b\3\4\b\t\i\y\k\w\v\3\5\f\5\n\m\y\r\x\p\r\l\c\g\v\6\c\r\s\i\n\a\p\x\v\u\2\q\9\9\k\w\h\0\w\6\x\6\0\r\y\9\3\g\2\c\m\n\u\n\9\f\9\g\3\t\9\3\l\e\7\d\6\5\6\g\y\p\5\s\x\p\p\6\r\x\p\0\r\o\h\3\j\o\8\e\c\o\7\o\i\k\8\5\b\3\0\j\u\4\3\g\u\3\v\l\d\g\a\l\e\s\i\g\e\s\x\p\1\w\b\b\z\8\l\f\3\9\g\8\k\a\f\d\j\a\k\l\v\a\g\n\f\k\w\f\p\4\1\p\1\e\6\s\a\v\8\8\o\3\u\q\8\1\x\y\n\w\t\p\0\9\j\u\o\u\5\a\5\c\f\3\y\6\e\x\q\5\l\k\a\j\w\6\q\0\r\w\0\h\q\t\6\7\u\1\8\s\f\9\4\3\x\e\6\j\c\3\5\e\q\v\q\p\6\o\6\o\p\l\u\r\h\5\9\8\e\u\q\3\o\z\a\h\4\q\k\i ]] 00:27:29.660 07:28:03 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:29.660 07:28:03 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:29.660 [2024-02-13 07:28:03.295725] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:29.660 [2024-02-13 07:28:03.295916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140302 ] 00:27:29.918 [2024-02-13 07:28:03.464347] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.177 [2024-02-13 07:28:03.644768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.372  Copying: 512/512 [B] (average 500 kBps) 00:27:31.372 00:27:31.372 07:28:04 -- dd/posix.sh@93 -- # [[ 3xhbmsv0yxmz5ouqf9u7ix8zewblkho8cwvs90ecjy31m0slqrm2q8qyc5auqiygwtq5uwaq2176kxin65ykud8p03zht6nnqo98sx5bgvv8md1ifq22ja6lps3z57dip9x6n2dghxu4dqtx68fl3ag6uidub4vbsnpnji0n2tk81166greo61lbbqehceaoxpb2fv0bgajub3cqy53nmzynpu4urpsfqns53xv5am7gcrap0qqhwkcyyglo4gb34btiykwv35f5nmyrxprlcgv6crsinapxvu2q99kwh0w6x60ry93g2cmnun9f9g3t93le7d656gyp5sxpp6rxp0roh3jo8eco7oik85b30ju43gu3vldgalesigesxp1wbbz8lf39g8kafdjaklvagnfkwfp41p1e6sav88o3uq81xynwtp09juou5a5cf3y6exq5lkajw6q0rw0hqt67u18sf943xe6jc35eqvqp6o6oplurh598euq3ozah4qki == \3\x\h\b\m\s\v\0\y\x\m\z\5\o\u\q\f\9\u\7\i\x\8\z\e\w\b\l\k\h\o\8\c\w\v\s\9\0\e\c\j\y\3\1\m\0\s\l\q\r\m\2\q\8\q\y\c\5\a\u\q\i\y\g\w\t\q\5\u\w\a\q\2\1\7\6\k\x\i\n\6\5\y\k\u\d\8\p\0\3\z\h\t\6\n\n\q\o\9\8\s\x\5\b\g\v\v\8\m\d\1\i\f\q\2\2\j\a\6\l\p\s\3\z\5\7\d\i\p\9\x\6\n\2\d\g\h\x\u\4\d\q\t\x\6\8\f\l\3\a\g\6\u\i\d\u\b\4\v\b\s\n\p\n\j\i\0\n\2\t\k\8\1\1\6\6\g\r\e\o\6\1\l\b\b\q\e\h\c\e\a\o\x\p\b\2\f\v\0\b\g\a\j\u\b\3\c\q\y\5\3\n\m\z\y\n\p\u\4\u\r\p\s\f\q\n\s\5\3\x\v\5\a\m\7\g\c\r\a\p\0\q\q\h\w\k\c\y\y\g\l\o\4\g\b\3\4\b\t\i\y\k\w\v\3\5\f\5\n\m\y\r\x\p\r\l\c\g\v\6\c\r\s\i\n\a\p\x\v\u\2\q\9\9\k\w\h\0\w\6\x\6\0\r\y\9\3\g\2\c\m\n\u\n\9\f\9\g\3\t\9\3\l\e\7\d\6\5\6\g\y\p\5\s\x\p\p\6\r\x\p\0\r\o\h\3\j\o\8\e\c\o\7\o\i\k\8\5\b\3\0\j\u\4\3\g\u\3\v\l\d\g\a\l\e\s\i\g\e\s\x\p\1\w\b\b\z\8\l\f\3\9\g\8\k\a\f\d\j\a\k\l\v\a\g\n\f\k\w\f\p\4\1\p\1\e\6\s\a\v\8\8\o\3\u\q\8\1\x\y\n\w\t\p\0\9\j\u\o\u\5\a\5\c\f\3\y\6\e\x\q\5\l\k\a\j\w\6\q\0\r\w\0\h\q\t\6\7\u\1\8\s\f\9\4\3\x\e\6\j\c\3\5\e\q\v\q\p\6\o\6\o\p\l\u\r\h\5\9\8\e\u\q\3\o\z\a\h\4\q\k\i ]] 00:27:31.373 07:28:04 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:31.373 07:28:04 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:31.373 07:28:04 -- dd/common.sh@98 -- # xtrace_disable 00:27:31.373 07:28:04 -- common/autotest_common.sh@10 -- # set +x 00:27:31.373 07:28:04 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:31.373 07:28:04 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:31.373 [2024-02-13 07:28:05.025089] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:31.373 [2024-02-13 07:28:05.025242] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140327 ] 00:27:31.632 [2024-02-13 07:28:05.174609] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:31.890 [2024-02-13 07:28:05.352550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.088  Copying: 512/512 [B] (average 500 kBps) 00:27:33.089 00:27:33.089 07:28:06 -- dd/posix.sh@93 -- # [[ 63rdpi2hh6vlpnwfk20w28eontw85q6zku6ov7hxpdmy7xvg36tagk993ezbpqdlb4dwrx9itzgsl9zzbs6sl304xuj23hg40i9nwtjij3hosg5vo9f775wa44qd96t7wd205iwxyb60jqyiaw03am3wb7bead2y4vxcwr4crutrfqu1cc3gj94z0a5nbo1vl7e8raicmfz2wgbtb2zmv9tys0y07b5jag9swa6qo6j85idh1qt6r6ncikumxvjotsjmpq8babrgj8djsz41t4moof7qzgsu2092jnsffv9bfme9lub9sz0i2l9s6ypegdusgtw7s025v32hvzd80vxevcoas8s48kj7k5m9lypu3oj7flc52jhca24d99ypxno44yidzyywdw7epf8vlgd43j1uoof19yycbuie1wue2ypz3n490ipx60cql4bxjlagdoh7t863mae42lyzxobx8v1s9t714tk1jlpjpv9rdw931hl1ranif6snh1zo == \6\3\r\d\p\i\2\h\h\6\v\l\p\n\w\f\k\2\0\w\2\8\e\o\n\t\w\8\5\q\6\z\k\u\6\o\v\7\h\x\p\d\m\y\7\x\v\g\3\6\t\a\g\k\9\9\3\e\z\b\p\q\d\l\b\4\d\w\r\x\9\i\t\z\g\s\l\9\z\z\b\s\6\s\l\3\0\4\x\u\j\2\3\h\g\4\0\i\9\n\w\t\j\i\j\3\h\o\s\g\5\v\o\9\f\7\7\5\w\a\4\4\q\d\9\6\t\7\w\d\2\0\5\i\w\x\y\b\6\0\j\q\y\i\a\w\0\3\a\m\3\w\b\7\b\e\a\d\2\y\4\v\x\c\w\r\4\c\r\u\t\r\f\q\u\1\c\c\3\g\j\9\4\z\0\a\5\n\b\o\1\v\l\7\e\8\r\a\i\c\m\f\z\2\w\g\b\t\b\2\z\m\v\9\t\y\s\0\y\0\7\b\5\j\a\g\9\s\w\a\6\q\o\6\j\8\5\i\d\h\1\q\t\6\r\6\n\c\i\k\u\m\x\v\j\o\t\s\j\m\p\q\8\b\a\b\r\g\j\8\d\j\s\z\4\1\t\4\m\o\o\f\7\q\z\g\s\u\2\0\9\2\j\n\s\f\f\v\9\b\f\m\e\9\l\u\b\9\s\z\0\i\2\l\9\s\6\y\p\e\g\d\u\s\g\t\w\7\s\0\2\5\v\3\2\h\v\z\d\8\0\v\x\e\v\c\o\a\s\8\s\4\8\k\j\7\k\5\m\9\l\y\p\u\3\o\j\7\f\l\c\5\2\j\h\c\a\2\4\d\9\9\y\p\x\n\o\4\4\y\i\d\z\y\y\w\d\w\7\e\p\f\8\v\l\g\d\4\3\j\1\u\o\o\f\1\9\y\y\c\b\u\i\e\1\w\u\e\2\y\p\z\3\n\4\9\0\i\p\x\6\0\c\q\l\4\b\x\j\l\a\g\d\o\h\7\t\8\6\3\m\a\e\4\2\l\y\z\x\o\b\x\8\v\1\s\9\t\7\1\4\t\k\1\j\l\p\j\p\v\9\r\d\w\9\3\1\h\l\1\r\a\n\i\f\6\s\n\h\1\z\o ]] 00:27:33.089 07:28:06 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:33.089 07:28:06 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:33.089 [2024-02-13 07:28:06.713842] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:33.089 [2024-02-13 07:28:06.714023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140344 ] 00:27:33.347 [2024-02-13 07:28:06.867907] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.605 [2024-02-13 07:28:07.078874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.826  Copying: 512/512 [B] (average 500 kBps) 00:27:34.826 00:27:34.826 07:28:08 -- dd/posix.sh@93 -- # [[ 63rdpi2hh6vlpnwfk20w28eontw85q6zku6ov7hxpdmy7xvg36tagk993ezbpqdlb4dwrx9itzgsl9zzbs6sl304xuj23hg40i9nwtjij3hosg5vo9f775wa44qd96t7wd205iwxyb60jqyiaw03am3wb7bead2y4vxcwr4crutrfqu1cc3gj94z0a5nbo1vl7e8raicmfz2wgbtb2zmv9tys0y07b5jag9swa6qo6j85idh1qt6r6ncikumxvjotsjmpq8babrgj8djsz41t4moof7qzgsu2092jnsffv9bfme9lub9sz0i2l9s6ypegdusgtw7s025v32hvzd80vxevcoas8s48kj7k5m9lypu3oj7flc52jhca24d99ypxno44yidzyywdw7epf8vlgd43j1uoof19yycbuie1wue2ypz3n490ipx60cql4bxjlagdoh7t863mae42lyzxobx8v1s9t714tk1jlpjpv9rdw931hl1ranif6snh1zo == \6\3\r\d\p\i\2\h\h\6\v\l\p\n\w\f\k\2\0\w\2\8\e\o\n\t\w\8\5\q\6\z\k\u\6\o\v\7\h\x\p\d\m\y\7\x\v\g\3\6\t\a\g\k\9\9\3\e\z\b\p\q\d\l\b\4\d\w\r\x\9\i\t\z\g\s\l\9\z\z\b\s\6\s\l\3\0\4\x\u\j\2\3\h\g\4\0\i\9\n\w\t\j\i\j\3\h\o\s\g\5\v\o\9\f\7\7\5\w\a\4\4\q\d\9\6\t\7\w\d\2\0\5\i\w\x\y\b\6\0\j\q\y\i\a\w\0\3\a\m\3\w\b\7\b\e\a\d\2\y\4\v\x\c\w\r\4\c\r\u\t\r\f\q\u\1\c\c\3\g\j\9\4\z\0\a\5\n\b\o\1\v\l\7\e\8\r\a\i\c\m\f\z\2\w\g\b\t\b\2\z\m\v\9\t\y\s\0\y\0\7\b\5\j\a\g\9\s\w\a\6\q\o\6\j\8\5\i\d\h\1\q\t\6\r\6\n\c\i\k\u\m\x\v\j\o\t\s\j\m\p\q\8\b\a\b\r\g\j\8\d\j\s\z\4\1\t\4\m\o\o\f\7\q\z\g\s\u\2\0\9\2\j\n\s\f\f\v\9\b\f\m\e\9\l\u\b\9\s\z\0\i\2\l\9\s\6\y\p\e\g\d\u\s\g\t\w\7\s\0\2\5\v\3\2\h\v\z\d\8\0\v\x\e\v\c\o\a\s\8\s\4\8\k\j\7\k\5\m\9\l\y\p\u\3\o\j\7\f\l\c\5\2\j\h\c\a\2\4\d\9\9\y\p\x\n\o\4\4\y\i\d\z\y\y\w\d\w\7\e\p\f\8\v\l\g\d\4\3\j\1\u\o\o\f\1\9\y\y\c\b\u\i\e\1\w\u\e\2\y\p\z\3\n\4\9\0\i\p\x\6\0\c\q\l\4\b\x\j\l\a\g\d\o\h\7\t\8\6\3\m\a\e\4\2\l\y\z\x\o\b\x\8\v\1\s\9\t\7\1\4\t\k\1\j\l\p\j\p\v\9\r\d\w\9\3\1\h\l\1\r\a\n\i\f\6\s\n\h\1\z\o ]] 00:27:34.826 07:28:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:34.826 07:28:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:34.826 [2024-02-13 07:28:08.473554] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:34.826 [2024-02-13 07:28:08.473738] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140398 ] 00:27:35.084 [2024-02-13 07:28:08.640782] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.343 [2024-02-13 07:28:08.821572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.536  Copying: 512/512 [B] (average 250 kBps) 00:27:36.536 00:27:36.536 07:28:10 -- dd/posix.sh@93 -- # [[ 63rdpi2hh6vlpnwfk20w28eontw85q6zku6ov7hxpdmy7xvg36tagk993ezbpqdlb4dwrx9itzgsl9zzbs6sl304xuj23hg40i9nwtjij3hosg5vo9f775wa44qd96t7wd205iwxyb60jqyiaw03am3wb7bead2y4vxcwr4crutrfqu1cc3gj94z0a5nbo1vl7e8raicmfz2wgbtb2zmv9tys0y07b5jag9swa6qo6j85idh1qt6r6ncikumxvjotsjmpq8babrgj8djsz41t4moof7qzgsu2092jnsffv9bfme9lub9sz0i2l9s6ypegdusgtw7s025v32hvzd80vxevcoas8s48kj7k5m9lypu3oj7flc52jhca24d99ypxno44yidzyywdw7epf8vlgd43j1uoof19yycbuie1wue2ypz3n490ipx60cql4bxjlagdoh7t863mae42lyzxobx8v1s9t714tk1jlpjpv9rdw931hl1ranif6snh1zo == \6\3\r\d\p\i\2\h\h\6\v\l\p\n\w\f\k\2\0\w\2\8\e\o\n\t\w\8\5\q\6\z\k\u\6\o\v\7\h\x\p\d\m\y\7\x\v\g\3\6\t\a\g\k\9\9\3\e\z\b\p\q\d\l\b\4\d\w\r\x\9\i\t\z\g\s\l\9\z\z\b\s\6\s\l\3\0\4\x\u\j\2\3\h\g\4\0\i\9\n\w\t\j\i\j\3\h\o\s\g\5\v\o\9\f\7\7\5\w\a\4\4\q\d\9\6\t\7\w\d\2\0\5\i\w\x\y\b\6\0\j\q\y\i\a\w\0\3\a\m\3\w\b\7\b\e\a\d\2\y\4\v\x\c\w\r\4\c\r\u\t\r\f\q\u\1\c\c\3\g\j\9\4\z\0\a\5\n\b\o\1\v\l\7\e\8\r\a\i\c\m\f\z\2\w\g\b\t\b\2\z\m\v\9\t\y\s\0\y\0\7\b\5\j\a\g\9\s\w\a\6\q\o\6\j\8\5\i\d\h\1\q\t\6\r\6\n\c\i\k\u\m\x\v\j\o\t\s\j\m\p\q\8\b\a\b\r\g\j\8\d\j\s\z\4\1\t\4\m\o\o\f\7\q\z\g\s\u\2\0\9\2\j\n\s\f\f\v\9\b\f\m\e\9\l\u\b\9\s\z\0\i\2\l\9\s\6\y\p\e\g\d\u\s\g\t\w\7\s\0\2\5\v\3\2\h\v\z\d\8\0\v\x\e\v\c\o\a\s\8\s\4\8\k\j\7\k\5\m\9\l\y\p\u\3\o\j\7\f\l\c\5\2\j\h\c\a\2\4\d\9\9\y\p\x\n\o\4\4\y\i\d\z\y\y\w\d\w\7\e\p\f\8\v\l\g\d\4\3\j\1\u\o\o\f\1\9\y\y\c\b\u\i\e\1\w\u\e\2\y\p\z\3\n\4\9\0\i\p\x\6\0\c\q\l\4\b\x\j\l\a\g\d\o\h\7\t\8\6\3\m\a\e\4\2\l\y\z\x\o\b\x\8\v\1\s\9\t\7\1\4\t\k\1\j\l\p\j\p\v\9\r\d\w\9\3\1\h\l\1\r\a\n\i\f\6\s\n\h\1\z\o ]] 00:27:36.536 07:28:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:36.536 07:28:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:36.536 [2024-02-13 07:28:10.216034] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:36.536 [2024-02-13 07:28:10.216222] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140423 ] 00:27:36.795 [2024-02-13 07:28:10.382021] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.054 [2024-02-13 07:28:10.568287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.249  Copying: 512/512 [B] (average 250 kBps) 00:27:38.249 00:27:38.250 ************************************ 00:27:38.250 END TEST dd_flags_misc 00:27:38.250 ************************************ 00:27:38.250 07:28:11 -- dd/posix.sh@93 -- # [[ 63rdpi2hh6vlpnwfk20w28eontw85q6zku6ov7hxpdmy7xvg36tagk993ezbpqdlb4dwrx9itzgsl9zzbs6sl304xuj23hg40i9nwtjij3hosg5vo9f775wa44qd96t7wd205iwxyb60jqyiaw03am3wb7bead2y4vxcwr4crutrfqu1cc3gj94z0a5nbo1vl7e8raicmfz2wgbtb2zmv9tys0y07b5jag9swa6qo6j85idh1qt6r6ncikumxvjotsjmpq8babrgj8djsz41t4moof7qzgsu2092jnsffv9bfme9lub9sz0i2l9s6ypegdusgtw7s025v32hvzd80vxevcoas8s48kj7k5m9lypu3oj7flc52jhca24d99ypxno44yidzyywdw7epf8vlgd43j1uoof19yycbuie1wue2ypz3n490ipx60cql4bxjlagdoh7t863mae42lyzxobx8v1s9t714tk1jlpjpv9rdw931hl1ranif6snh1zo == \6\3\r\d\p\i\2\h\h\6\v\l\p\n\w\f\k\2\0\w\2\8\e\o\n\t\w\8\5\q\6\z\k\u\6\o\v\7\h\x\p\d\m\y\7\x\v\g\3\6\t\a\g\k\9\9\3\e\z\b\p\q\d\l\b\4\d\w\r\x\9\i\t\z\g\s\l\9\z\z\b\s\6\s\l\3\0\4\x\u\j\2\3\h\g\4\0\i\9\n\w\t\j\i\j\3\h\o\s\g\5\v\o\9\f\7\7\5\w\a\4\4\q\d\9\6\t\7\w\d\2\0\5\i\w\x\y\b\6\0\j\q\y\i\a\w\0\3\a\m\3\w\b\7\b\e\a\d\2\y\4\v\x\c\w\r\4\c\r\u\t\r\f\q\u\1\c\c\3\g\j\9\4\z\0\a\5\n\b\o\1\v\l\7\e\8\r\a\i\c\m\f\z\2\w\g\b\t\b\2\z\m\v\9\t\y\s\0\y\0\7\b\5\j\a\g\9\s\w\a\6\q\o\6\j\8\5\i\d\h\1\q\t\6\r\6\n\c\i\k\u\m\x\v\j\o\t\s\j\m\p\q\8\b\a\b\r\g\j\8\d\j\s\z\4\1\t\4\m\o\o\f\7\q\z\g\s\u\2\0\9\2\j\n\s\f\f\v\9\b\f\m\e\9\l\u\b\9\s\z\0\i\2\l\9\s\6\y\p\e\g\d\u\s\g\t\w\7\s\0\2\5\v\3\2\h\v\z\d\8\0\v\x\e\v\c\o\a\s\8\s\4\8\k\j\7\k\5\m\9\l\y\p\u\3\o\j\7\f\l\c\5\2\j\h\c\a\2\4\d\9\9\y\p\x\n\o\4\4\y\i\d\z\y\y\w\d\w\7\e\p\f\8\v\l\g\d\4\3\j\1\u\o\o\f\1\9\y\y\c\b\u\i\e\1\w\u\e\2\y\p\z\3\n\4\9\0\i\p\x\6\0\c\q\l\4\b\x\j\l\a\g\d\o\h\7\t\8\6\3\m\a\e\4\2\l\y\z\x\o\b\x\8\v\1\s\9\t\7\1\4\t\k\1\j\l\p\j\p\v\9\r\d\w\9\3\1\h\l\1\r\a\n\i\f\6\s\n\h\1\z\o ]] 00:27:38.250 00:27:38.250 real 0m13.871s 00:27:38.250 user 0m10.750s 00:27:38.250 sys 0m2.035s 00:27:38.250 07:28:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:38.250 07:28:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.250 07:28:11 -- dd/posix.sh@131 -- # tests_forced_aio 00:27:38.250 07:28:11 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO' 00:27:38.250 * Second test run, using AIO 00:27:38.250 07:28:11 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:27:38.250 07:28:11 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:27:38.250 07:28:11 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:38.250 07:28:11 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:38.250 07:28:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.509 ************************************ 00:27:38.509 START TEST dd_flag_append_forced_aio 00:27:38.509 ************************************ 00:27:38.509 07:28:11 -- common/autotest_common.sh@1102 -- # append 00:27:38.509 07:28:11 -- dd/posix.sh@16 -- # local dump0 00:27:38.509 07:28:11 -- dd/posix.sh@17 -- # local dump1 00:27:38.509 07:28:11 -- dd/posix.sh@19 -- # gen_bytes 32 00:27:38.509 07:28:11 -- dd/common.sh@98 -- # xtrace_disable 
00:27:38.509 07:28:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.509 07:28:11 -- dd/posix.sh@19 -- # dump0=7fo0uheas1dibj0z82ojnoyg0rdutktk 00:27:38.509 07:28:11 -- dd/posix.sh@20 -- # gen_bytes 32 00:27:38.509 07:28:11 -- dd/common.sh@98 -- # xtrace_disable 00:27:38.509 07:28:11 -- common/autotest_common.sh@10 -- # set +x 00:27:38.509 07:28:11 -- dd/posix.sh@20 -- # dump1=yt8dhepgaxwdwu9hvb7wxlid2zri46h4 00:27:38.509 07:28:11 -- dd/posix.sh@22 -- # printf %s 7fo0uheas1dibj0z82ojnoyg0rdutktk 00:27:38.509 07:28:11 -- dd/posix.sh@23 -- # printf %s yt8dhepgaxwdwu9hvb7wxlid2zri46h4 00:27:38.509 07:28:11 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:27:38.509 [2024-02-13 07:28:12.018126] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:38.509 [2024-02-13 07:28:12.018314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140468 ] 00:27:38.509 [2024-02-13 07:28:12.185279] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.767 [2024-02-13 07:28:12.379026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.402  Copying: 32/32 [B] (average 31 kBps) 00:27:40.402 00:27:40.402 07:28:13 -- dd/posix.sh@27 -- # [[ yt8dhepgaxwdwu9hvb7wxlid2zri46h47fo0uheas1dibj0z82ojnoyg0rdutktk == \y\t\8\d\h\e\p\g\a\x\w\d\w\u\9\h\v\b\7\w\x\l\i\d\2\z\r\i\4\6\h\4\7\f\o\0\u\h\e\a\s\1\d\i\b\j\0\z\8\2\o\j\n\o\y\g\0\r\d\u\t\k\t\k ]] 00:27:40.402 00:27:40.402 real 0m1.765s 00:27:40.402 user 0m1.374s 00:27:40.402 sys 0m0.252s 00:27:40.402 07:28:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:40.402 07:28:13 -- common/autotest_common.sh@10 -- # set +x 00:27:40.402 ************************************ 00:27:40.402 END TEST dd_flag_append_forced_aio 00:27:40.402 ************************************ 00:27:40.402 07:28:13 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:27:40.402 07:28:13 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:40.402 07:28:13 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:40.402 07:28:13 -- common/autotest_common.sh@10 -- # set +x 00:27:40.402 ************************************ 00:27:40.402 START TEST dd_flag_directory_forced_aio 00:27:40.402 ************************************ 00:27:40.402 07:28:13 -- common/autotest_common.sh@1102 -- # directory 00:27:40.402 07:28:13 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:40.402 07:28:13 -- common/autotest_common.sh@638 -- # local es=0 00:27:40.402 07:28:13 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:40.402 07:28:13 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.402 07:28:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:40.402 07:28:13 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.402 07:28:13 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:40.402 07:28:13 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.402 07:28:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:40.402 07:28:13 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:40.402 07:28:13 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:40.402 07:28:13 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:40.402 [2024-02-13 07:28:13.818284] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:40.402 [2024-02-13 07:28:13.818459] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140514 ] 00:27:40.402 [2024-02-13 07:28:13.971029] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.661 [2024-02-13 07:28:14.152763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.920 [2024-02-13 07:28:14.432179] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:40.920 [2024-02-13 07:28:14.432271] spdk_dd.c:1068:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:40.920 [2024-02-13 07:28:14.432297] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:41.487 [2024-02-13 07:28:15.068691] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:27:41.746 07:28:15 -- common/autotest_common.sh@641 -- # es=236 00:27:41.746 07:28:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:41.746 07:28:15 -- common/autotest_common.sh@650 -- # es=108 00:27:41.746 07:28:15 -- common/autotest_common.sh@651 -- # case "$es" in 00:27:41.746 07:28:15 -- common/autotest_common.sh@658 -- # es=1 00:27:41.746 07:28:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:41.746 07:28:15 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:41.746 07:28:15 -- common/autotest_common.sh@638 -- # local es=0 00:27:41.746 07:28:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:41.746 07:28:15 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:41.746 07:28:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:41.746 07:28:15 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:41.746 07:28:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:41.746 07:28:15 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:41.746 07:28:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:41.746 07:28:15 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
00:27:41.746 07:28:15 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:41.746 07:28:15 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:27:42.004 [2024-02-13 07:28:15.491554] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:42.004 [2024-02-13 07:28:15.491755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140542 ] 00:27:42.004 [2024-02-13 07:28:15.655875] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.263 [2024-02-13 07:28:15.837127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.522 [2024-02-13 07:28:16.128709] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:42.522 [2024-02-13 07:28:16.128813] spdk_dd.c:1117:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:27:42.522 [2024-02-13 07:28:16.128844] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:43.090 [2024-02-13 07:28:16.758830] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:27:43.656 07:28:17 -- common/autotest_common.sh@641 -- # es=236 00:27:43.656 07:28:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:43.656 07:28:17 -- common/autotest_common.sh@650 -- # es=108 00:27:43.656 07:28:17 -- common/autotest_common.sh@651 -- # case "$es" in 00:27:43.656 07:28:17 -- common/autotest_common.sh@658 -- # es=1 00:27:43.656 07:28:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:43.656 00:27:43.656 real 0m3.362s 00:27:43.656 user 0m2.687s 00:27:43.656 sys 0m0.475s 00:27:43.656 ************************************ 00:27:43.656 END TEST dd_flag_directory_forced_aio 00:27:43.656 ************************************ 00:27:43.656 07:28:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:43.656 07:28:17 -- common/autotest_common.sh@10 -- # set +x 00:27:43.656 07:28:17 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:27:43.656 07:28:17 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:43.656 07:28:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:43.656 07:28:17 -- common/autotest_common.sh@10 -- # set +x 00:27:43.656 ************************************ 00:27:43.656 START TEST dd_flag_nofollow_forced_aio 00:27:43.656 ************************************ 00:27:43.656 07:28:17 -- common/autotest_common.sh@1102 -- # nofollow 00:27:43.656 07:28:17 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:43.656 07:28:17 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:43.656 07:28:17 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:27:43.656 07:28:17 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:27:43.656 07:28:17 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:43.657 07:28:17 -- common/autotest_common.sh@638 -- # local es=0 00:27:43.657 07:28:17 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:43.657 07:28:17 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.657 07:28:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:43.657 07:28:17 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.657 07:28:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:43.657 07:28:17 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.657 07:28:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:43.657 07:28:17 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:43.657 07:28:17 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:43.657 07:28:17 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:43.657 [2024-02-13 07:28:17.248598] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:43.657 [2024-02-13 07:28:17.248942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140593 ] 00:27:43.915 [2024-02-13 07:28:17.416875] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.915 [2024-02-13 07:28:17.609001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:44.482 [2024-02-13 07:28:17.890903] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:44.482 [2024-02-13 07:28:17.891291] spdk_dd.c:1068:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:27:44.482 [2024-02-13 07:28:17.891352] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:45.049 [2024-02-13 07:28:18.525001] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:27:45.308 07:28:18 -- common/autotest_common.sh@641 -- # es=216 00:27:45.308 07:28:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:45.308 07:28:18 -- common/autotest_common.sh@650 -- # es=88 00:27:45.308 07:28:18 -- common/autotest_common.sh@651 -- # case "$es" in 00:27:45.308 07:28:18 -- common/autotest_common.sh@658 -- # es=1 00:27:45.309 07:28:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:45.309 07:28:18 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:45.309 07:28:18 -- common/autotest_common.sh@638 -- # local es=0 00:27:45.309 07:28:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:45.309 07:28:18 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.309 07:28:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:45.309 07:28:18 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.309 07:28:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:45.309 07:28:18 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.309 07:28:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:45.309 07:28:18 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:45.309 07:28:18 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:27:45.309 07:28:18 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:27:45.309 [2024-02-13 07:28:18.962817] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:45.309 [2024-02-13 07:28:18.963235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140637 ] 00:27:45.579 [2024-02-13 07:28:19.129732] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.837 [2024-02-13 07:28:19.308005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.096 [2024-02-13 07:28:19.591274] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:46.096 [2024-02-13 07:28:19.591660] spdk_dd.c:1117:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:27:46.096 [2024-02-13 07:28:19.591722] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:27:46.663 [2024-02-13 07:28:20.228829] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:27:46.921 07:28:20 -- common/autotest_common.sh@641 -- # es=216 00:27:46.921 07:28:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:46.921 07:28:20 -- common/autotest_common.sh@650 -- # es=88 00:27:46.921 07:28:20 -- common/autotest_common.sh@651 -- # case "$es" in 00:27:46.921 07:28:20 -- common/autotest_common.sh@658 -- # es=1 00:27:46.921 07:28:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:46.921 07:28:20 -- dd/posix.sh@46 -- # gen_bytes 512 00:27:46.921 07:28:20 -- dd/common.sh@98 -- # xtrace_disable 00:27:46.921 07:28:20 -- common/autotest_common.sh@10 -- # set +x 00:27:46.921 07:28:20 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:47.180 [2024-02-13 07:28:20.649961] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:47.180 [2024-02-13 07:28:20.650416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140653 ] 00:27:47.180 [2024-02-13 07:28:20.805559] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.438 [2024-02-13 07:28:21.000581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.634  Copying: 512/512 [B] (average 500 kBps) 00:27:48.634 00:27:48.893 ************************************ 00:27:48.893 END TEST dd_flag_nofollow_forced_aio 00:27:48.893 ************************************ 00:27:48.893 07:28:22 -- dd/posix.sh@49 -- # [[ i89ris0do8qz1crqwohekpuvs0cnfylw160nwriyg0740s3yguy2lt14km81mnrgw501w7yo9uotbiao4jn5zyou3ii4ow0706b3l04jldun3v9goi5cgpp3u90oor8hms1qr0505bsvzs1t2v0koqrrbc71aemmc2es3a0d7vg4v2y96rez2hz2ra93hoy1i8ftqlkr2u3bq5308umryukbe4pn2z1jor0qnllsf39ie4tnws7z4ompq40gqnjdo26hznceglxycdltldpgst3x1m0f6c99a0aaifcuibqbkmbg31a37yz2cv5txfpbakr51wlraovgty96dj5suimei1gm75se0y6yv984bw2av14qwp6g3q61gtp0l5oug7sdhiddv1c436nl2tf4zy1orzwybm7gti6e2gd156782198be0305ul8urudvgyam83d5sz33r0mju2m5goc1aigzec16lxfjs8ooomp6nonrkdgkyb02leo9emnff7 == \i\8\9\r\i\s\0\d\o\8\q\z\1\c\r\q\w\o\h\e\k\p\u\v\s\0\c\n\f\y\l\w\1\6\0\n\w\r\i\y\g\0\7\4\0\s\3\y\g\u\y\2\l\t\1\4\k\m\8\1\m\n\r\g\w\5\0\1\w\7\y\o\9\u\o\t\b\i\a\o\4\j\n\5\z\y\o\u\3\i\i\4\o\w\0\7\0\6\b\3\l\0\4\j\l\d\u\n\3\v\9\g\o\i\5\c\g\p\p\3\u\9\0\o\o\r\8\h\m\s\1\q\r\0\5\0\5\b\s\v\z\s\1\t\2\v\0\k\o\q\r\r\b\c\7\1\a\e\m\m\c\2\e\s\3\a\0\d\7\v\g\4\v\2\y\9\6\r\e\z\2\h\z\2\r\a\9\3\h\o\y\1\i\8\f\t\q\l\k\r\2\u\3\b\q\5\3\0\8\u\m\r\y\u\k\b\e\4\p\n\2\z\1\j\o\r\0\q\n\l\l\s\f\3\9\i\e\4\t\n\w\s\7\z\4\o\m\p\q\4\0\g\q\n\j\d\o\2\6\h\z\n\c\e\g\l\x\y\c\d\l\t\l\d\p\g\s\t\3\x\1\m\0\f\6\c\9\9\a\0\a\a\i\f\c\u\i\b\q\b\k\m\b\g\3\1\a\3\7\y\z\2\c\v\5\t\x\f\p\b\a\k\r\5\1\w\l\r\a\o\v\g\t\y\9\6\d\j\5\s\u\i\m\e\i\1\g\m\7\5\s\e\0\y\6\y\v\9\8\4\b\w\2\a\v\1\4\q\w\p\6\g\3\q\6\1\g\t\p\0\l\5\o\u\g\7\s\d\h\i\d\d\v\1\c\4\3\6\n\l\2\t\f\4\z\y\1\o\r\z\w\y\b\m\7\g\t\i\6\e\2\g\d\1\5\6\7\8\2\1\9\8\b\e\0\3\0\5\u\l\8\u\r\u\d\v\g\y\a\m\8\3\d\5\s\z\3\3\r\0\m\j\u\2\m\5\g\o\c\1\a\i\g\z\e\c\1\6\l\x\f\j\s\8\o\o\o\m\p\6\n\o\n\r\k\d\g\k\y\b\0\2\l\e\o\9\e\m\n\f\f\7 ]] 00:27:48.893 00:27:48.893 real 0m5.157s 00:27:48.893 user 0m3.973s 00:27:48.893 sys 0m0.840s 00:27:48.893 07:28:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:48.893 07:28:22 -- common/autotest_common.sh@10 -- # set +x 00:27:48.893 07:28:22 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:27:48.893 07:28:22 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:48.893 07:28:22 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:48.893 07:28:22 -- common/autotest_common.sh@10 -- # set +x 00:27:48.893 ************************************ 00:27:48.893 START TEST dd_flag_noatime_forced_aio 00:27:48.893 ************************************ 00:27:48.893 07:28:22 -- common/autotest_common.sh@1102 -- # noatime 00:27:48.893 07:28:22 -- dd/posix.sh@53 -- # local atime_if 00:27:48.893 07:28:22 -- dd/posix.sh@54 -- # local atime_of 00:27:48.893 07:28:22 -- dd/posix.sh@58 -- # gen_bytes 512 00:27:48.893 07:28:22 -- dd/common.sh@98 -- # xtrace_disable 00:27:48.893 07:28:22 -- common/autotest_common.sh@10 -- # set +x 00:27:48.893 07:28:22 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:48.893 07:28:22 -- dd/posix.sh@60 -- # atime_if=1707809301 
00:27:48.893 07:28:22 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:48.893 07:28:22 -- dd/posix.sh@61 -- # atime_of=1707809302 00:27:48.893 07:28:22 -- dd/posix.sh@66 -- # sleep 1 00:27:49.904 07:28:23 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:49.904 [2024-02-13 07:28:23.469940] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:27:49.904 [2024-02-13 07:28:23.470316] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140717 ] 00:27:50.162 [2024-02-13 07:28:23.624494] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.162 [2024-02-13 07:28:23.805492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.798  Copying: 512/512 [B] (average 500 kBps) 00:27:51.798 00:27:51.798 07:28:25 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:51.798 07:28:25 -- dd/posix.sh@69 -- # (( atime_if == 1707809301 )) 00:27:51.798 07:28:25 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:51.798 07:28:25 -- dd/posix.sh@70 -- # (( atime_of == 1707809302 )) 00:27:51.798 07:28:25 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:27:51.798 [2024-02-13 07:28:25.204352] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:51.798 [2024-02-13 07:28:25.204894] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140748 ] 00:27:51.798 [2024-02-13 07:28:25.372435] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.058 [2024-02-13 07:28:25.545003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.253  Copying: 512/512 [B] (average 500 kBps) 00:27:53.253 00:27:53.253 07:28:26 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:27:53.253 ************************************ 00:27:53.253 END TEST dd_flag_noatime_forced_aio 00:27:53.253 ************************************ 00:27:53.253 07:28:26 -- dd/posix.sh@73 -- # (( atime_if < 1707809305 )) 00:27:53.253 00:27:53.253 real 0m4.477s 00:27:53.253 user 0m2.738s 00:27:53.253 sys 0m0.468s 00:27:53.253 07:28:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:53.253 07:28:26 -- common/autotest_common.sh@10 -- # set +x 00:27:53.253 07:28:26 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:27:53.253 07:28:26 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:27:53.253 07:28:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:27:53.253 07:28:26 -- common/autotest_common.sh@10 -- # set +x 00:27:53.253 ************************************ 00:27:53.253 START TEST dd_flags_misc_forced_aio 00:27:53.253 ************************************ 00:27:53.253 07:28:26 -- common/autotest_common.sh@1102 -- # io 00:27:53.253 07:28:26 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:27:53.253 07:28:26 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:27:53.253 07:28:26 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:27:53.253 07:28:26 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:27:53.253 07:28:26 -- dd/posix.sh@86 -- # gen_bytes 512 00:27:53.253 07:28:26 -- dd/common.sh@98 -- # xtrace_disable 00:27:53.253 07:28:26 -- common/autotest_common.sh@10 -- # set +x 00:27:53.253 07:28:26 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:53.253 07:28:26 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:27:53.513 [2024-02-13 07:28:26.999363] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
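The flag arrays defined above (flags_ro for the read side, flags_rw adding sync and dsync for the write side) drive a plain cross-product: every input flag is paired with every output flag and the same 512 random bytes must survive each combination, verified by the long pattern match on their rendered form. A rough standalone analogue with GNU dd, which accepts the same direct, nonblock, sync and dsync flag names (paths are placeholders; note that O_DIRECT is unsupported on tmpfs and needs block-aligned sizes, which 512 bytes satisfies on common 512-byte-sector filesystems):

    #!/usr/bin/env bash
    # Sketch of the flag cross-product, assuming GNU dd flag names;
    # cmp stands in for the test's string comparison of the contents.
    src=./flag_src.bin                      # use a disk-backed path, not tmpfs
    dst=./flag_dst.bin
    dd if=/dev/urandom of="$src" bs=512 count=1 status=none
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)  # same construction as the test
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            dd if="$src" of="$dst" iflag="$flag_ro" oflag="$flag_rw" status=none
            cmp -s "$src" "$dst" || echo "mismatch with $flag_ro/$flag_rw"
        done
    done
    rm -f "$src" "$dst"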
00:27:53.513 [2024-02-13 07:28:26.999564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140784 ] 00:27:53.513 [2024-02-13 07:28:27.165335] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.772 [2024-02-13 07:28:27.346175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.407  Copying: 512/512 [B] (average 500 kBps) 00:27:55.407 00:27:55.407 07:28:28 -- dd/posix.sh@93 -- # [[ fxzhdd5i1azzxjpqlhrhxzyux5zpamzcezydk2z26qpsmnt1gfy6atzx4th2g47yrq433514z8cbg3fmws2z0eekpl2eqhazjzb8ubtn3skis9v3cyhyq4do3a1o6ee6656mzmeiwadg9a4t9sbhe45bqhi43bug97u9113ba0r69fnb3qb80b05nj593kcaz9nmibw4pib6f7waa64e8elsw6p1xg3hx89rkctpooixcfzv1jh7xsg10zvwth061rdt1l0gvq5qbzeyljm5j36rtts2t8nwcy0nrhezhq7478kk87rmw6c2drn4ptuaad1o1hn1y40ashvr5cdkf0ncb4mj9tq79nbmq19ccvjo4rgk9ppkb85owzwhrp3db3n1zbsziwme1tebnqd4pq5u5inioromvv9akymkba3yqksxelswqj9i7fww2yf52xk46wxf53k4c07jx43hd1un36lxkww6hi1gi0lk8ypt1d9p4h37djzdzhvoyy8d == \f\x\z\h\d\d\5\i\1\a\z\z\x\j\p\q\l\h\r\h\x\z\y\u\x\5\z\p\a\m\z\c\e\z\y\d\k\2\z\2\6\q\p\s\m\n\t\1\g\f\y\6\a\t\z\x\4\t\h\2\g\4\7\y\r\q\4\3\3\5\1\4\z\8\c\b\g\3\f\m\w\s\2\z\0\e\e\k\p\l\2\e\q\h\a\z\j\z\b\8\u\b\t\n\3\s\k\i\s\9\v\3\c\y\h\y\q\4\d\o\3\a\1\o\6\e\e\6\6\5\6\m\z\m\e\i\w\a\d\g\9\a\4\t\9\s\b\h\e\4\5\b\q\h\i\4\3\b\u\g\9\7\u\9\1\1\3\b\a\0\r\6\9\f\n\b\3\q\b\8\0\b\0\5\n\j\5\9\3\k\c\a\z\9\n\m\i\b\w\4\p\i\b\6\f\7\w\a\a\6\4\e\8\e\l\s\w\6\p\1\x\g\3\h\x\8\9\r\k\c\t\p\o\o\i\x\c\f\z\v\1\j\h\7\x\s\g\1\0\z\v\w\t\h\0\6\1\r\d\t\1\l\0\g\v\q\5\q\b\z\e\y\l\j\m\5\j\3\6\r\t\t\s\2\t\8\n\w\c\y\0\n\r\h\e\z\h\q\7\4\7\8\k\k\8\7\r\m\w\6\c\2\d\r\n\4\p\t\u\a\a\d\1\o\1\h\n\1\y\4\0\a\s\h\v\r\5\c\d\k\f\0\n\c\b\4\m\j\9\t\q\7\9\n\b\m\q\1\9\c\c\v\j\o\4\r\g\k\9\p\p\k\b\8\5\o\w\z\w\h\r\p\3\d\b\3\n\1\z\b\s\z\i\w\m\e\1\t\e\b\n\q\d\4\p\q\5\u\5\i\n\i\o\r\o\m\v\v\9\a\k\y\m\k\b\a\3\y\q\k\s\x\e\l\s\w\q\j\9\i\7\f\w\w\2\y\f\5\2\x\k\4\6\w\x\f\5\3\k\4\c\0\7\j\x\4\3\h\d\1\u\n\3\6\l\x\k\w\w\6\h\i\1\g\i\0\l\k\8\y\p\t\1\d\9\p\4\h\3\7\d\j\z\d\z\h\v\o\y\y\8\d ]] 00:27:55.407 07:28:28 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:55.407 07:28:28 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:27:55.407 [2024-02-13 07:28:28.746480] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:55.407 [2024-02-13 07:28:28.746689] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140829 ] 00:27:55.408 [2024-02-13 07:28:28.910032] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:55.408 [2024-02-13 07:28:29.087184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.912  Copying: 512/512 [B] (average 500 kBps) 00:27:56.912 00:27:56.912 07:28:30 -- dd/posix.sh@93 -- # [[ fxzhdd5i1azzxjpqlhrhxzyux5zpamzcezydk2z26qpsmnt1gfy6atzx4th2g47yrq433514z8cbg3fmws2z0eekpl2eqhazjzb8ubtn3skis9v3cyhyq4do3a1o6ee6656mzmeiwadg9a4t9sbhe45bqhi43bug97u9113ba0r69fnb3qb80b05nj593kcaz9nmibw4pib6f7waa64e8elsw6p1xg3hx89rkctpooixcfzv1jh7xsg10zvwth061rdt1l0gvq5qbzeyljm5j36rtts2t8nwcy0nrhezhq7478kk87rmw6c2drn4ptuaad1o1hn1y40ashvr5cdkf0ncb4mj9tq79nbmq19ccvjo4rgk9ppkb85owzwhrp3db3n1zbsziwme1tebnqd4pq5u5inioromvv9akymkba3yqksxelswqj9i7fww2yf52xk46wxf53k4c07jx43hd1un36lxkww6hi1gi0lk8ypt1d9p4h37djzdzhvoyy8d == \f\x\z\h\d\d\5\i\1\a\z\z\x\j\p\q\l\h\r\h\x\z\y\u\x\5\z\p\a\m\z\c\e\z\y\d\k\2\z\2\6\q\p\s\m\n\t\1\g\f\y\6\a\t\z\x\4\t\h\2\g\4\7\y\r\q\4\3\3\5\1\4\z\8\c\b\g\3\f\m\w\s\2\z\0\e\e\k\p\l\2\e\q\h\a\z\j\z\b\8\u\b\t\n\3\s\k\i\s\9\v\3\c\y\h\y\q\4\d\o\3\a\1\o\6\e\e\6\6\5\6\m\z\m\e\i\w\a\d\g\9\a\4\t\9\s\b\h\e\4\5\b\q\h\i\4\3\b\u\g\9\7\u\9\1\1\3\b\a\0\r\6\9\f\n\b\3\q\b\8\0\b\0\5\n\j\5\9\3\k\c\a\z\9\n\m\i\b\w\4\p\i\b\6\f\7\w\a\a\6\4\e\8\e\l\s\w\6\p\1\x\g\3\h\x\8\9\r\k\c\t\p\o\o\i\x\c\f\z\v\1\j\h\7\x\s\g\1\0\z\v\w\t\h\0\6\1\r\d\t\1\l\0\g\v\q\5\q\b\z\e\y\l\j\m\5\j\3\6\r\t\t\s\2\t\8\n\w\c\y\0\n\r\h\e\z\h\q\7\4\7\8\k\k\8\7\r\m\w\6\c\2\d\r\n\4\p\t\u\a\a\d\1\o\1\h\n\1\y\4\0\a\s\h\v\r\5\c\d\k\f\0\n\c\b\4\m\j\9\t\q\7\9\n\b\m\q\1\9\c\c\v\j\o\4\r\g\k\9\p\p\k\b\8\5\o\w\z\w\h\r\p\3\d\b\3\n\1\z\b\s\z\i\w\m\e\1\t\e\b\n\q\d\4\p\q\5\u\5\i\n\i\o\r\o\m\v\v\9\a\k\y\m\k\b\a\3\y\q\k\s\x\e\l\s\w\q\j\9\i\7\f\w\w\2\y\f\5\2\x\k\4\6\w\x\f\5\3\k\4\c\0\7\j\x\4\3\h\d\1\u\n\3\6\l\x\k\w\w\6\h\i\1\g\i\0\l\k\8\y\p\t\1\d\9\p\4\h\3\7\d\j\z\d\z\h\v\o\y\y\8\d ]] 00:27:56.912 07:28:30 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:56.912 07:28:30 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:27:56.912 [2024-02-13 07:28:30.473273] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:56.912 [2024-02-13 07:28:30.473474] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140854 ] 00:27:57.172 [2024-02-13 07:28:30.641098] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.172 [2024-02-13 07:28:30.819040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.809  Copying: 512/512 [B] (average 250 kBps) 00:27:58.809 00:27:58.809 07:28:32 -- dd/posix.sh@93 -- # [[ fxzhdd5i1azzxjpqlhrhxzyux5zpamzcezydk2z26qpsmnt1gfy6atzx4th2g47yrq433514z8cbg3fmws2z0eekpl2eqhazjzb8ubtn3skis9v3cyhyq4do3a1o6ee6656mzmeiwadg9a4t9sbhe45bqhi43bug97u9113ba0r69fnb3qb80b05nj593kcaz9nmibw4pib6f7waa64e8elsw6p1xg3hx89rkctpooixcfzv1jh7xsg10zvwth061rdt1l0gvq5qbzeyljm5j36rtts2t8nwcy0nrhezhq7478kk87rmw6c2drn4ptuaad1o1hn1y40ashvr5cdkf0ncb4mj9tq79nbmq19ccvjo4rgk9ppkb85owzwhrp3db3n1zbsziwme1tebnqd4pq5u5inioromvv9akymkba3yqksxelswqj9i7fww2yf52xk46wxf53k4c07jx43hd1un36lxkww6hi1gi0lk8ypt1d9p4h37djzdzhvoyy8d == \f\x\z\h\d\d\5\i\1\a\z\z\x\j\p\q\l\h\r\h\x\z\y\u\x\5\z\p\a\m\z\c\e\z\y\d\k\2\z\2\6\q\p\s\m\n\t\1\g\f\y\6\a\t\z\x\4\t\h\2\g\4\7\y\r\q\4\3\3\5\1\4\z\8\c\b\g\3\f\m\w\s\2\z\0\e\e\k\p\l\2\e\q\h\a\z\j\z\b\8\u\b\t\n\3\s\k\i\s\9\v\3\c\y\h\y\q\4\d\o\3\a\1\o\6\e\e\6\6\5\6\m\z\m\e\i\w\a\d\g\9\a\4\t\9\s\b\h\e\4\5\b\q\h\i\4\3\b\u\g\9\7\u\9\1\1\3\b\a\0\r\6\9\f\n\b\3\q\b\8\0\b\0\5\n\j\5\9\3\k\c\a\z\9\n\m\i\b\w\4\p\i\b\6\f\7\w\a\a\6\4\e\8\e\l\s\w\6\p\1\x\g\3\h\x\8\9\r\k\c\t\p\o\o\i\x\c\f\z\v\1\j\h\7\x\s\g\1\0\z\v\w\t\h\0\6\1\r\d\t\1\l\0\g\v\q\5\q\b\z\e\y\l\j\m\5\j\3\6\r\t\t\s\2\t\8\n\w\c\y\0\n\r\h\e\z\h\q\7\4\7\8\k\k\8\7\r\m\w\6\c\2\d\r\n\4\p\t\u\a\a\d\1\o\1\h\n\1\y\4\0\a\s\h\v\r\5\c\d\k\f\0\n\c\b\4\m\j\9\t\q\7\9\n\b\m\q\1\9\c\c\v\j\o\4\r\g\k\9\p\p\k\b\8\5\o\w\z\w\h\r\p\3\d\b\3\n\1\z\b\s\z\i\w\m\e\1\t\e\b\n\q\d\4\p\q\5\u\5\i\n\i\o\r\o\m\v\v\9\a\k\y\m\k\b\a\3\y\q\k\s\x\e\l\s\w\q\j\9\i\7\f\w\w\2\y\f\5\2\x\k\4\6\w\x\f\5\3\k\4\c\0\7\j\x\4\3\h\d\1\u\n\3\6\l\x\k\w\w\6\h\i\1\g\i\0\l\k\8\y\p\t\1\d\9\p\4\h\3\7\d\j\z\d\z\h\v\o\y\y\8\d ]] 00:27:58.809 07:28:32 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:27:58.809 07:28:32 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:27:58.809 [2024-02-13 07:28:32.203666] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:27:58.809 [2024-02-13 07:28:32.203913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140878 ] 00:27:58.809 [2024-02-13 07:28:32.370009] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.068 [2024-02-13 07:28:32.549755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:00.264  Copying: 512/512 [B] (average 250 kBps) 00:28:00.264 00:28:00.264 07:28:33 -- dd/posix.sh@93 -- # [[ fxzhdd5i1azzxjpqlhrhxzyux5zpamzcezydk2z26qpsmnt1gfy6atzx4th2g47yrq433514z8cbg3fmws2z0eekpl2eqhazjzb8ubtn3skis9v3cyhyq4do3a1o6ee6656mzmeiwadg9a4t9sbhe45bqhi43bug97u9113ba0r69fnb3qb80b05nj593kcaz9nmibw4pib6f7waa64e8elsw6p1xg3hx89rkctpooixcfzv1jh7xsg10zvwth061rdt1l0gvq5qbzeyljm5j36rtts2t8nwcy0nrhezhq7478kk87rmw6c2drn4ptuaad1o1hn1y40ashvr5cdkf0ncb4mj9tq79nbmq19ccvjo4rgk9ppkb85owzwhrp3db3n1zbsziwme1tebnqd4pq5u5inioromvv9akymkba3yqksxelswqj9i7fww2yf52xk46wxf53k4c07jx43hd1un36lxkww6hi1gi0lk8ypt1d9p4h37djzdzhvoyy8d == \f\x\z\h\d\d\5\i\1\a\z\z\x\j\p\q\l\h\r\h\x\z\y\u\x\5\z\p\a\m\z\c\e\z\y\d\k\2\z\2\6\q\p\s\m\n\t\1\g\f\y\6\a\t\z\x\4\t\h\2\g\4\7\y\r\q\4\3\3\5\1\4\z\8\c\b\g\3\f\m\w\s\2\z\0\e\e\k\p\l\2\e\q\h\a\z\j\z\b\8\u\b\t\n\3\s\k\i\s\9\v\3\c\y\h\y\q\4\d\o\3\a\1\o\6\e\e\6\6\5\6\m\z\m\e\i\w\a\d\g\9\a\4\t\9\s\b\h\e\4\5\b\q\h\i\4\3\b\u\g\9\7\u\9\1\1\3\b\a\0\r\6\9\f\n\b\3\q\b\8\0\b\0\5\n\j\5\9\3\k\c\a\z\9\n\m\i\b\w\4\p\i\b\6\f\7\w\a\a\6\4\e\8\e\l\s\w\6\p\1\x\g\3\h\x\8\9\r\k\c\t\p\o\o\i\x\c\f\z\v\1\j\h\7\x\s\g\1\0\z\v\w\t\h\0\6\1\r\d\t\1\l\0\g\v\q\5\q\b\z\e\y\l\j\m\5\j\3\6\r\t\t\s\2\t\8\n\w\c\y\0\n\r\h\e\z\h\q\7\4\7\8\k\k\8\7\r\m\w\6\c\2\d\r\n\4\p\t\u\a\a\d\1\o\1\h\n\1\y\4\0\a\s\h\v\r\5\c\d\k\f\0\n\c\b\4\m\j\9\t\q\7\9\n\b\m\q\1\9\c\c\v\j\o\4\r\g\k\9\p\p\k\b\8\5\o\w\z\w\h\r\p\3\d\b\3\n\1\z\b\s\z\i\w\m\e\1\t\e\b\n\q\d\4\p\q\5\u\5\i\n\i\o\r\o\m\v\v\9\a\k\y\m\k\b\a\3\y\q\k\s\x\e\l\s\w\q\j\9\i\7\f\w\w\2\y\f\5\2\x\k\4\6\w\x\f\5\3\k\4\c\0\7\j\x\4\3\h\d\1\u\n\3\6\l\x\k\w\w\6\h\i\1\g\i\0\l\k\8\y\p\t\1\d\9\p\4\h\3\7\d\j\z\d\z\h\v\o\y\y\8\d ]] 00:28:00.264 07:28:33 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:28:00.264 07:28:33 -- dd/posix.sh@86 -- # gen_bytes 512 00:28:00.264 07:28:33 -- dd/common.sh@98 -- # xtrace_disable 00:28:00.264 07:28:33 -- common/autotest_common.sh@10 -- # set +x 00:28:00.264 07:28:33 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:00.264 07:28:33 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:28:00.264 [2024-02-13 07:28:33.918267] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:00.264 [2024-02-13 07:28:33.918420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140900 ] 00:28:00.524 [2024-02-13 07:28:34.070084] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.782 [2024-02-13 07:28:34.251163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.978  Copying: 512/512 [B] (average 500 kBps) 00:28:01.978 00:28:01.978 07:28:35 -- dd/posix.sh@93 -- # [[ 1fgcuuc3jh91n2fmyfeaateqvh4kp38eq52ejv4vad6zpabgkmyk7o89b4eqcw0c2sbxhgkru7udjrof2pjbog5j0jw4xg3eiaurtoy9keo7kdtmrz0i1cqingva5steqmd4kcrxjec09z933o1tsjyiwb0rl5ap9dovchz84xizur1fe2qnzdm2ih9uq99o7fcx5nmz1mmag6480q64hqsmre5c8a32vu0xj4c9268do54vcw3yg5ztby4hrkmddnvkxallkf4ip46mjkonas9blki7i1wsrjabcp3a3rkbkz0vf9czuyj9wjovvynsydtvq273x9g0r52119t6jaosefil93fdyjh8wbduu59m33xh2173lq8q8a9511cb4ayci2752h9kmu1fxtfd6yu6lrz1c2z08upjgac7bshnboaewtcpa08rsgfngo8ai2l6xrray5f4lqzt8usa4brj9oybrka1yncpao2a87u3u9duhmsbltvlbtvgm1sl == \1\f\g\c\u\u\c\3\j\h\9\1\n\2\f\m\y\f\e\a\a\t\e\q\v\h\4\k\p\3\8\e\q\5\2\e\j\v\4\v\a\d\6\z\p\a\b\g\k\m\y\k\7\o\8\9\b\4\e\q\c\w\0\c\2\s\b\x\h\g\k\r\u\7\u\d\j\r\o\f\2\p\j\b\o\g\5\j\0\j\w\4\x\g\3\e\i\a\u\r\t\o\y\9\k\e\o\7\k\d\t\m\r\z\0\i\1\c\q\i\n\g\v\a\5\s\t\e\q\m\d\4\k\c\r\x\j\e\c\0\9\z\9\3\3\o\1\t\s\j\y\i\w\b\0\r\l\5\a\p\9\d\o\v\c\h\z\8\4\x\i\z\u\r\1\f\e\2\q\n\z\d\m\2\i\h\9\u\q\9\9\o\7\f\c\x\5\n\m\z\1\m\m\a\g\6\4\8\0\q\6\4\h\q\s\m\r\e\5\c\8\a\3\2\v\u\0\x\j\4\c\9\2\6\8\d\o\5\4\v\c\w\3\y\g\5\z\t\b\y\4\h\r\k\m\d\d\n\v\k\x\a\l\l\k\f\4\i\p\4\6\m\j\k\o\n\a\s\9\b\l\k\i\7\i\1\w\s\r\j\a\b\c\p\3\a\3\r\k\b\k\z\0\v\f\9\c\z\u\y\j\9\w\j\o\v\v\y\n\s\y\d\t\v\q\2\7\3\x\9\g\0\r\5\2\1\1\9\t\6\j\a\o\s\e\f\i\l\9\3\f\d\y\j\h\8\w\b\d\u\u\5\9\m\3\3\x\h\2\1\7\3\l\q\8\q\8\a\9\5\1\1\c\b\4\a\y\c\i\2\7\5\2\h\9\k\m\u\1\f\x\t\f\d\6\y\u\6\l\r\z\1\c\2\z\0\8\u\p\j\g\a\c\7\b\s\h\n\b\o\a\e\w\t\c\p\a\0\8\r\s\g\f\n\g\o\8\a\i\2\l\6\x\r\r\a\y\5\f\4\l\q\z\t\8\u\s\a\4\b\r\j\9\o\y\b\r\k\a\1\y\n\c\p\a\o\2\a\8\7\u\3\u\9\d\u\h\m\s\b\l\t\v\l\b\t\v\g\m\1\s\l ]] 00:28:01.978 07:28:35 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:01.978 07:28:35 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:28:01.978 [2024-02-13 07:28:35.628768] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:01.978 [2024-02-13 07:28:35.629000] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140924 ] 00:28:02.237 [2024-02-13 07:28:35.797349] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.496 [2024-02-13 07:28:35.972050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:03.693  Copying: 512/512 [B] (average 500 kBps) 00:28:03.693 00:28:03.693 07:28:37 -- dd/posix.sh@93 -- # [[ 1fgcuuc3jh91n2fmyfeaateqvh4kp38eq52ejv4vad6zpabgkmyk7o89b4eqcw0c2sbxhgkru7udjrof2pjbog5j0jw4xg3eiaurtoy9keo7kdtmrz0i1cqingva5steqmd4kcrxjec09z933o1tsjyiwb0rl5ap9dovchz84xizur1fe2qnzdm2ih9uq99o7fcx5nmz1mmag6480q64hqsmre5c8a32vu0xj4c9268do54vcw3yg5ztby4hrkmddnvkxallkf4ip46mjkonas9blki7i1wsrjabcp3a3rkbkz0vf9czuyj9wjovvynsydtvq273x9g0r52119t6jaosefil93fdyjh8wbduu59m33xh2173lq8q8a9511cb4ayci2752h9kmu1fxtfd6yu6lrz1c2z08upjgac7bshnboaewtcpa08rsgfngo8ai2l6xrray5f4lqzt8usa4brj9oybrka1yncpao2a87u3u9duhmsbltvlbtvgm1sl == \1\f\g\c\u\u\c\3\j\h\9\1\n\2\f\m\y\f\e\a\a\t\e\q\v\h\4\k\p\3\8\e\q\5\2\e\j\v\4\v\a\d\6\z\p\a\b\g\k\m\y\k\7\o\8\9\b\4\e\q\c\w\0\c\2\s\b\x\h\g\k\r\u\7\u\d\j\r\o\f\2\p\j\b\o\g\5\j\0\j\w\4\x\g\3\e\i\a\u\r\t\o\y\9\k\e\o\7\k\d\t\m\r\z\0\i\1\c\q\i\n\g\v\a\5\s\t\e\q\m\d\4\k\c\r\x\j\e\c\0\9\z\9\3\3\o\1\t\s\j\y\i\w\b\0\r\l\5\a\p\9\d\o\v\c\h\z\8\4\x\i\z\u\r\1\f\e\2\q\n\z\d\m\2\i\h\9\u\q\9\9\o\7\f\c\x\5\n\m\z\1\m\m\a\g\6\4\8\0\q\6\4\h\q\s\m\r\e\5\c\8\a\3\2\v\u\0\x\j\4\c\9\2\6\8\d\o\5\4\v\c\w\3\y\g\5\z\t\b\y\4\h\r\k\m\d\d\n\v\k\x\a\l\l\k\f\4\i\p\4\6\m\j\k\o\n\a\s\9\b\l\k\i\7\i\1\w\s\r\j\a\b\c\p\3\a\3\r\k\b\k\z\0\v\f\9\c\z\u\y\j\9\w\j\o\v\v\y\n\s\y\d\t\v\q\2\7\3\x\9\g\0\r\5\2\1\1\9\t\6\j\a\o\s\e\f\i\l\9\3\f\d\y\j\h\8\w\b\d\u\u\5\9\m\3\3\x\h\2\1\7\3\l\q\8\q\8\a\9\5\1\1\c\b\4\a\y\c\i\2\7\5\2\h\9\k\m\u\1\f\x\t\f\d\6\y\u\6\l\r\z\1\c\2\z\0\8\u\p\j\g\a\c\7\b\s\h\n\b\o\a\e\w\t\c\p\a\0\8\r\s\g\f\n\g\o\8\a\i\2\l\6\x\r\r\a\y\5\f\4\l\q\z\t\8\u\s\a\4\b\r\j\9\o\y\b\r\k\a\1\y\n\c\p\a\o\2\a\8\7\u\3\u\9\d\u\h\m\s\b\l\t\v\l\b\t\v\g\m\1\s\l ]] 00:28:03.693 07:28:37 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:03.694 07:28:37 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:28:03.694 [2024-02-13 07:28:37.362646] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:03.694 [2024-02-13 07:28:37.362888] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140948 ] 00:28:03.953 [2024-02-13 07:28:37.530341] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.212 [2024-02-13 07:28:37.711544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.422  Copying: 512/512 [B] (average 250 kBps) 00:28:05.422 00:28:05.422 07:28:39 -- dd/posix.sh@93 -- # [[ 1fgcuuc3jh91n2fmyfeaateqvh4kp38eq52ejv4vad6zpabgkmyk7o89b4eqcw0c2sbxhgkru7udjrof2pjbog5j0jw4xg3eiaurtoy9keo7kdtmrz0i1cqingva5steqmd4kcrxjec09z933o1tsjyiwb0rl5ap9dovchz84xizur1fe2qnzdm2ih9uq99o7fcx5nmz1mmag6480q64hqsmre5c8a32vu0xj4c9268do54vcw3yg5ztby4hrkmddnvkxallkf4ip46mjkonas9blki7i1wsrjabcp3a3rkbkz0vf9czuyj9wjovvynsydtvq273x9g0r52119t6jaosefil93fdyjh8wbduu59m33xh2173lq8q8a9511cb4ayci2752h9kmu1fxtfd6yu6lrz1c2z08upjgac7bshnboaewtcpa08rsgfngo8ai2l6xrray5f4lqzt8usa4brj9oybrka1yncpao2a87u3u9duhmsbltvlbtvgm1sl == \1\f\g\c\u\u\c\3\j\h\9\1\n\2\f\m\y\f\e\a\a\t\e\q\v\h\4\k\p\3\8\e\q\5\2\e\j\v\4\v\a\d\6\z\p\a\b\g\k\m\y\k\7\o\8\9\b\4\e\q\c\w\0\c\2\s\b\x\h\g\k\r\u\7\u\d\j\r\o\f\2\p\j\b\o\g\5\j\0\j\w\4\x\g\3\e\i\a\u\r\t\o\y\9\k\e\o\7\k\d\t\m\r\z\0\i\1\c\q\i\n\g\v\a\5\s\t\e\q\m\d\4\k\c\r\x\j\e\c\0\9\z\9\3\3\o\1\t\s\j\y\i\w\b\0\r\l\5\a\p\9\d\o\v\c\h\z\8\4\x\i\z\u\r\1\f\e\2\q\n\z\d\m\2\i\h\9\u\q\9\9\o\7\f\c\x\5\n\m\z\1\m\m\a\g\6\4\8\0\q\6\4\h\q\s\m\r\e\5\c\8\a\3\2\v\u\0\x\j\4\c\9\2\6\8\d\o\5\4\v\c\w\3\y\g\5\z\t\b\y\4\h\r\k\m\d\d\n\v\k\x\a\l\l\k\f\4\i\p\4\6\m\j\k\o\n\a\s\9\b\l\k\i\7\i\1\w\s\r\j\a\b\c\p\3\a\3\r\k\b\k\z\0\v\f\9\c\z\u\y\j\9\w\j\o\v\v\y\n\s\y\d\t\v\q\2\7\3\x\9\g\0\r\5\2\1\1\9\t\6\j\a\o\s\e\f\i\l\9\3\f\d\y\j\h\8\w\b\d\u\u\5\9\m\3\3\x\h\2\1\7\3\l\q\8\q\8\a\9\5\1\1\c\b\4\a\y\c\i\2\7\5\2\h\9\k\m\u\1\f\x\t\f\d\6\y\u\6\l\r\z\1\c\2\z\0\8\u\p\j\g\a\c\7\b\s\h\n\b\o\a\e\w\t\c\p\a\0\8\r\s\g\f\n\g\o\8\a\i\2\l\6\x\r\r\a\y\5\f\4\l\q\z\t\8\u\s\a\4\b\r\j\9\o\y\b\r\k\a\1\y\n\c\p\a\o\2\a\8\7\u\3\u\9\d\u\h\m\s\b\l\t\v\l\b\t\v\g\m\1\s\l ]] 00:28:05.422 07:28:39 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:28:05.422 07:28:39 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:28:05.680 [2024-02-13 07:28:39.122031] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:05.680 [2024-02-13 07:28:39.122245] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140985 ] 00:28:05.680 [2024-02-13 07:28:39.286992] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.939 [2024-02-13 07:28:39.466517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.134  Copying: 512/512 [B] (average 166 kBps) 00:28:07.134 00:28:07.134 ************************************ 00:28:07.134 END TEST dd_flags_misc_forced_aio 00:28:07.134 ************************************ 00:28:07.134 07:28:40 -- dd/posix.sh@93 -- # [[ 1fgcuuc3jh91n2fmyfeaateqvh4kp38eq52ejv4vad6zpabgkmyk7o89b4eqcw0c2sbxhgkru7udjrof2pjbog5j0jw4xg3eiaurtoy9keo7kdtmrz0i1cqingva5steqmd4kcrxjec09z933o1tsjyiwb0rl5ap9dovchz84xizur1fe2qnzdm2ih9uq99o7fcx5nmz1mmag6480q64hqsmre5c8a32vu0xj4c9268do54vcw3yg5ztby4hrkmddnvkxallkf4ip46mjkonas9blki7i1wsrjabcp3a3rkbkz0vf9czuyj9wjovvynsydtvq273x9g0r52119t6jaosefil93fdyjh8wbduu59m33xh2173lq8q8a9511cb4ayci2752h9kmu1fxtfd6yu6lrz1c2z08upjgac7bshnboaewtcpa08rsgfngo8ai2l6xrray5f4lqzt8usa4brj9oybrka1yncpao2a87u3u9duhmsbltvlbtvgm1sl == \1\f\g\c\u\u\c\3\j\h\9\1\n\2\f\m\y\f\e\a\a\t\e\q\v\h\4\k\p\3\8\e\q\5\2\e\j\v\4\v\a\d\6\z\p\a\b\g\k\m\y\k\7\o\8\9\b\4\e\q\c\w\0\c\2\s\b\x\h\g\k\r\u\7\u\d\j\r\o\f\2\p\j\b\o\g\5\j\0\j\w\4\x\g\3\e\i\a\u\r\t\o\y\9\k\e\o\7\k\d\t\m\r\z\0\i\1\c\q\i\n\g\v\a\5\s\t\e\q\m\d\4\k\c\r\x\j\e\c\0\9\z\9\3\3\o\1\t\s\j\y\i\w\b\0\r\l\5\a\p\9\d\o\v\c\h\z\8\4\x\i\z\u\r\1\f\e\2\q\n\z\d\m\2\i\h\9\u\q\9\9\o\7\f\c\x\5\n\m\z\1\m\m\a\g\6\4\8\0\q\6\4\h\q\s\m\r\e\5\c\8\a\3\2\v\u\0\x\j\4\c\9\2\6\8\d\o\5\4\v\c\w\3\y\g\5\z\t\b\y\4\h\r\k\m\d\d\n\v\k\x\a\l\l\k\f\4\i\p\4\6\m\j\k\o\n\a\s\9\b\l\k\i\7\i\1\w\s\r\j\a\b\c\p\3\a\3\r\k\b\k\z\0\v\f\9\c\z\u\y\j\9\w\j\o\v\v\y\n\s\y\d\t\v\q\2\7\3\x\9\g\0\r\5\2\1\1\9\t\6\j\a\o\s\e\f\i\l\9\3\f\d\y\j\h\8\w\b\d\u\u\5\9\m\3\3\x\h\2\1\7\3\l\q\8\q\8\a\9\5\1\1\c\b\4\a\y\c\i\2\7\5\2\h\9\k\m\u\1\f\x\t\f\d\6\y\u\6\l\r\z\1\c\2\z\0\8\u\p\j\g\a\c\7\b\s\h\n\b\o\a\e\w\t\c\p\a\0\8\r\s\g\f\n\g\o\8\a\i\2\l\6\x\r\r\a\y\5\f\4\l\q\z\t\8\u\s\a\4\b\r\j\9\o\y\b\r\k\a\1\y\n\c\p\a\o\2\a\8\7\u\3\u\9\d\u\h\m\s\b\l\t\v\l\b\t\v\g\m\1\s\l ]] 00:28:07.134 00:28:07.134 real 0m13.857s 00:28:07.134 user 0m10.650s 00:28:07.134 sys 0m2.117s 00:28:07.134 07:28:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:07.134 07:28:40 -- common/autotest_common.sh@10 -- # set +x 00:28:07.134 07:28:40 -- dd/posix.sh@1 -- # cleanup 00:28:07.134 07:28:40 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:28:07.134 07:28:40 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:28:07.394 00:28:07.394 real 0m57.989s 00:28:07.394 user 0m43.305s 00:28:07.394 sys 0m8.500s 00:28:07.394 07:28:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:07.394 07:28:40 -- common/autotest_common.sh@10 -- # set +x 00:28:07.394 ************************************ 00:28:07.394 END TEST spdk_dd_posix 00:28:07.394 ************************************ 00:28:07.394 07:28:40 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:07.394 07:28:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:07.394 07:28:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:07.394 07:28:40 -- 
common/autotest_common.sh@10 -- # set +x 00:28:07.394 ************************************ 00:28:07.394 START TEST spdk_dd_malloc 00:28:07.394 ************************************ 00:28:07.394 07:28:40 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:28:07.394 * Looking for test storage... 00:28:07.394 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:07.394 07:28:40 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:07.394 07:28:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.394 07:28:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.394 07:28:40 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:28:07.394 07:28:40 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:07.394 07:28:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:07.394 07:28:40 -- common/autotest_common.sh@10 -- # set +x 00:28:07.394 ************************************ 00:28:07.394 START TEST dd_malloc_copy 00:28:07.394 ************************************ 00:28:07.394 07:28:40 -- common/autotest_common.sh@1102 -- # malloc_copy 00:28:07.394 07:28:40 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:28:07.394 07:28:40 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:28:07.394 07:28:40 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(["name"]=$mbdev0 ["num_blocks"]=$mbdev0_b ["block_size"]=$mbdev0_bs) 00:28:07.394 07:28:40 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:28:07.394 07:28:40 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(["name"]=$mbdev1 ["num_blocks"]=$mbdev1_b ["block_size"]=$mbdev1_bs) 00:28:07.394 07:28:40 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:28:07.394 07:28:40 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:28:07.394 07:28:40 -- dd/malloc.sh@28 -- # gen_conf 00:28:07.394 07:28:40 -- dd/common.sh@31 -- # xtrace_disable 00:28:07.394 07:28:40 -- common/autotest_common.sh@10 -- # set +x 00:28:07.394 { 00:28:07.394 "subsystems": [ 00:28:07.394 { 00:28:07.394 "subsystem": "bdev", 00:28:07.394 "config": [ 00:28:07.394 { 00:28:07.394 "params": { 00:28:07.394 "num_blocks": 1048576, 00:28:07.394 "block_size": 512, 00:28:07.394 "name": "malloc0" 00:28:07.394 }, 00:28:07.394 "method": "bdev_malloc_create" 00:28:07.394 }, 00:28:07.394 { 00:28:07.394 "params": { 00:28:07.394 "num_blocks": 1048576, 00:28:07.394 "block_size": 512, 00:28:07.394 "name": "malloc1" 00:28:07.394 }, 00:28:07.394 "method": "bdev_malloc_create" 00:28:07.394 }, 00:28:07.394 { 00:28:07.394 "method": "bdev_wait_for_examine" 00:28:07.394 } 00:28:07.394 ] 00:28:07.394 } 00:28:07.394 ] 00:28:07.394 } 00:28:07.394 [2024-02-13 07:28:41.039130] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
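Each malloc bdev declared in the JSON above is 1048576 blocks of 512 bytes, i.e. 512 MiB per bdev, which is exactly the 512 MB total the progress lines below report. The arithmetic, for reference:

    echo $(( 1048576 * 512 ))               # 536870912 bytes
    echo $(( 1048576 * 512 / 1048576 ))     # 512 MiB per malloc bdev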
00:28:07.394 [2024-02-13 07:28:41.039338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141074 ] 00:28:07.654 [2024-02-13 07:28:41.208773] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:07.918 [2024-02-13 07:28:41.391635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.918 [2024-02-13 07:28:41.391769] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:11.292  Copying: 223/512 [MB] (223 MBps) Copying: 448/512 [MB] (225 MBps) Copying: 512/512 [MB] (average 224 MBps)[2024-02-13 07:28:44.885081] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:14.581 00:28:14.581 00:28:14.581 07:28:47 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:28:14.581 07:28:47 -- dd/malloc.sh@33 -- # gen_conf 00:28:14.581 07:28:47 -- dd/common.sh@31 -- # xtrace_disable 00:28:14.581 07:28:47 -- common/autotest_common.sh@10 -- # set +x 00:28:14.581 { 00:28:14.581 "subsystems": [ 00:28:14.581 { 00:28:14.581 "subsystem": "bdev", 00:28:14.581 "config": [ 00:28:14.581 { 00:28:14.581 "params": { 00:28:14.581 "num_blocks": 1048576, 00:28:14.581 "block_size": 512, 00:28:14.581 "name": "malloc0" 00:28:14.581 }, 00:28:14.581 "method": "bdev_malloc_create" 00:28:14.581 }, 00:28:14.581 { 00:28:14.581 "params": { 00:28:14.581 "num_blocks": 1048576, 00:28:14.581 "block_size": 512, 00:28:14.581 "name": "malloc1" 00:28:14.581 }, 00:28:14.581 "method": "bdev_malloc_create" 00:28:14.581 }, 00:28:14.581 { 00:28:14.581 "method": "bdev_wait_for_examine" 00:28:14.581 } 00:28:14.581 ] 00:28:14.581 } 00:28:14.581 ] 00:28:14.581 } 00:28:14.581 [2024-02-13 07:28:48.053252] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:14.581 [2024-02-13 07:28:48.054208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141178 ] 00:28:14.581 [2024-02-13 07:28:48.217576] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.840 [2024-02-13 07:28:48.395206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.840 [2024-02-13 07:28:48.395353] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:18.201  Copying: 229/512 [MB] (229 MBps) Copying: 457/512 [MB] (227 MBps) Copying: 512/512 [MB] (average 228 MBps)[2024-02-13 07:28:51.834555] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:21.541 00:28:21.541 00:28:21.541 00:28:21.541 real 0m13.978s 00:28:21.541 user 0m12.610s 00:28:21.541 sys 0m1.227s 00:28:21.541 ************************************ 00:28:21.541 END TEST dd_malloc_copy 00:28:21.541 07:28:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:21.541 07:28:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.541 ************************************ 00:28:21.541 00:28:21.541 real 0m14.110s 00:28:21.541 user 0m12.697s 00:28:21.541 sys 0m1.275s 00:28:21.541 07:28:54 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:21.541 07:28:54 -- common/autotest_common.sh@10 -- # set +x 00:28:21.541 ************************************ 00:28:21.541 END TEST spdk_dd_malloc 00:28:21.541 ************************************ 00:28:21.541 07:28:55 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:21.541 07:28:55 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:28:21.541 07:28:55 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:21.541 07:28:55 -- common/autotest_common.sh@10 -- # set +x 00:28:21.541 ************************************ 00:28:21.541 START TEST spdk_dd_bdev_to_bdev 00:28:21.541 ************************************ 00:28:21.541 07:28:55 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 00:28:21.541 * Looking for test storage... 
00:28:21.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:21.541 07:28:55 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:21.541 07:28:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.541 07:28:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 )) 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:06.0 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(["name"]=$nvme0 ["traddr"]=$nvme0_pci ["trtype"]=pcie) 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(["name"]=$bdev1 ["filename"]=$aio1 ["block_size"]=4096) 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0 00:28:21.541 07:28:55 -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256 00:28:21.541 [2024-02-13 07:28:55.184787] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:21.541 [2024-02-13 07:28:55.185180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141332 ] 00:28:21.800 [2024-02-13 07:28:55.347539] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.060 [2024-02-13 07:28:55.525750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.697  Copying: 256/256 [MB] (average 1319 MBps) 00:28:23.697 00:28:23.697 07:28:57 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:23.698 07:28:57 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:23.698 07:28:57 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:28:23.698 07:28:57 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:28:23.698 07:28:57 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:23.698 07:28:57 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:28:23.698 07:28:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:23.698 07:28:57 -- common/autotest_common.sh@10 -- # set +x 00:28:23.698 ************************************ 00:28:23.698 START TEST dd_inflate_file 00:28:23.698 ************************************ 00:28:23.698 07:28:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:28:23.698 [2024-02-13 07:28:57.142919] 
Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:23.698 [2024-02-13 07:28:57.143270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141362 ] 00:28:23.698 [2024-02-13 07:28:57.296629] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.957 [2024-02-13 07:28:57.492001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.593  Copying: 64/64 [MB] (average 1306 MBps) 00:28:25.593 00:28:25.593 ************************************ 00:28:25.593 END TEST dd_inflate_file 00:28:25.593 ************************************ 00:28:25.593 00:28:25.593 real 0m1.795s 00:28:25.593 user 0m1.355s 00:28:25.593 sys 0m0.305s 00:28:25.593 07:28:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:25.593 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:28:25.593 07:28:58 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:28:25.593 07:28:58 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:28:25.593 07:28:58 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:25.593 07:28:58 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:28:25.593 07:28:58 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:28:25.593 07:28:58 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:25.593 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:28:25.593 07:28:58 -- dd/common.sh@31 -- # xtrace_disable 00:28:25.593 07:28:58 -- common/autotest_common.sh@10 -- # set +x 00:28:25.593 ************************************ 00:28:25.593 START TEST dd_copy_to_out_bdev 00:28:25.593 ************************************ 00:28:25.593 07:28:58 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:28:25.593 { 00:28:25.593 "subsystems": [ 00:28:25.593 { 00:28:25.593 "subsystem": "bdev", 00:28:25.593 "config": [ 00:28:25.593 { 00:28:25.593 "params": { 00:28:25.593 "block_size": 4096, 00:28:25.593 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:25.593 "name": "aio1" 00:28:25.593 }, 00:28:25.593 "method": "bdev_aio_create" 00:28:25.593 }, 00:28:25.593 { 00:28:25.593 "params": { 00:28:25.593 "trtype": "pcie", 00:28:25.593 "traddr": "0000:00:06.0", 00:28:25.593 "name": "Nvme0" 00:28:25.593 }, 00:28:25.593 "method": "bdev_nvme_attach_controller" 00:28:25.593 }, 00:28:25.593 { 00:28:25.593 "method": "bdev_wait_for_examine" 00:28:25.593 } 00:28:25.593 ] 00:28:25.593 } 00:28:25.593 ] 00:28:25.593 } 00:28:25.593 [2024-02-13 07:28:59.011331] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:25.593 [2024-02-13 07:28:59.011731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141446 ] 00:28:25.593 [2024-02-13 07:28:59.180902] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.852 [2024-02-13 07:28:59.360215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.852 [2024-02-13 07:28:59.360627] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:27.489  Copying: 48/64 [MB] (48 MBps) Copying: 64/64 [MB] (average 48 MBps)[2024-02-13 07:29:01.073519] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:28.866 00:28:28.866 00:28:28.866 ************************************ 00:28:28.866 END TEST dd_copy_to_out_bdev 00:28:28.866 ************************************ 00:28:28.866 00:28:28.866 real 0m3.218s 00:28:28.866 user 0m2.746s 00:28:28.866 sys 0m0.370s 00:28:28.866 07:29:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:28.866 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:28:28.866 07:29:02 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:28:28.866 07:29:02 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:28:28.866 07:29:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:28.866 07:29:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:28.866 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:28:28.866 ************************************ 00:28:28.866 START TEST dd_offset_magic 00:28:28.866 ************************************ 00:28:28.866 07:29:02 -- common/autotest_common.sh@1102 -- # offset_magic 00:28:28.866 07:29:02 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:28:28.866 07:29:02 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:28:28.866 07:29:02 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:28:28.866 07:29:02 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:28.866 07:29:02 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:28:28.866 07:29:02 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:28.866 07:29:02 -- dd/common.sh@31 -- # xtrace_disable 00:28:28.866 07:29:02 -- common/autotest_common.sh@10 -- # set +x 00:28:28.866 { 00:28:28.866 "subsystems": [ 00:28:28.866 { 00:28:28.866 "subsystem": "bdev", 00:28:28.866 "config": [ 00:28:28.866 { 00:28:28.866 "params": { 00:28:28.866 "block_size": 4096, 00:28:28.866 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:28.866 "name": "aio1" 00:28:28.866 }, 00:28:28.866 "method": "bdev_aio_create" 00:28:28.866 }, 00:28:28.867 { 00:28:28.867 "params": { 00:28:28.867 "trtype": "pcie", 00:28:28.867 "traddr": "0000:00:06.0", 00:28:28.867 "name": "Nvme0" 00:28:28.867 }, 00:28:28.867 "method": "bdev_nvme_attach_controller" 00:28:28.867 }, 00:28:28.867 { 00:28:28.867 "method": "bdev_wait_for_examine" 00:28:28.867 } 00:28:28.867 ] 00:28:28.867 } 00:28:28.867 ] 00:28:28.867 } 00:28:28.867 [2024-02-13 07:29:02.293226] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:28.867 [2024-02-13 07:29:02.293647] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141504 ] 00:28:28.867 [2024-02-13 07:29:02.460410] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.126 [2024-02-13 07:29:02.650040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:29.126 [2024-02-13 07:29:02.650413] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:29.694  Copying: 65/65 [MB] (average 270 MBps)[2024-02-13 07:29:03.288157] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:30.631 00:28:30.631 00:28:30.891 07:29:04 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:28:30.891 07:29:04 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:30.891 07:29:04 -- dd/common.sh@31 -- # xtrace_disable 00:28:30.891 07:29:04 -- common/autotest_common.sh@10 -- # set +x 00:28:30.891 [2024-02-13 07:29:04.399655] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:30.891 [2024-02-13 07:29:04.400205] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141542 ] 00:28:30.891 { 00:28:30.891 "subsystems": [ 00:28:30.891 { 00:28:30.891 "subsystem": "bdev", 00:28:30.891 "config": [ 00:28:30.891 { 00:28:30.891 "params": { 00:28:30.891 "block_size": 4096, 00:28:30.891 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:30.891 "name": "aio1" 00:28:30.891 }, 00:28:30.891 "method": "bdev_aio_create" 00:28:30.891 }, 00:28:30.891 { 00:28:30.891 "params": { 00:28:30.891 "trtype": "pcie", 00:28:30.891 "traddr": "0000:00:06.0", 00:28:30.891 "name": "Nvme0" 00:28:30.891 }, 00:28:30.891 "method": "bdev_nvme_attach_controller" 00:28:30.891 }, 00:28:30.891 { 00:28:30.891 "method": "bdev_wait_for_examine" 00:28:30.891 } 00:28:30.891 ] 00:28:30.891 } 00:28:30.891 ] 00:28:30.891 } 00:28:30.891 [2024-02-13 07:29:04.566150] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.150 [2024-02-13 07:29:04.746381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.150 [2024-02-13 07:29:04.746773] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:31.719  Copying: 1024/1024 [kB] (average 500 MBps)[2024-02-13 07:29:05.147388] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:32.656 00:28:32.656 00:28:32.656 07:29:06 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:28:32.656 07:29:06 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:28:32.656 
07:29:06 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:28:32.656 07:29:06 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:28:32.656 07:29:06 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:28:32.656 07:29:06 -- dd/common.sh@31 -- # xtrace_disable 00:28:32.656 07:29:06 -- common/autotest_common.sh@10 -- # set +x 00:28:32.656 [2024-02-13 07:29:06.282272] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:32.656 [2024-02-13 07:29:06.283417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141575 ] 00:28:32.656 { 00:28:32.656 "subsystems": [ 00:28:32.656 { 00:28:32.656 "subsystem": "bdev", 00:28:32.656 "config": [ 00:28:32.656 { 00:28:32.656 "params": { 00:28:32.656 "block_size": 4096, 00:28:32.656 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:32.656 "name": "aio1" 00:28:32.656 }, 00:28:32.656 "method": "bdev_aio_create" 00:28:32.656 }, 00:28:32.656 { 00:28:32.656 "params": { 00:28:32.656 "trtype": "pcie", 00:28:32.656 "traddr": "0000:00:06.0", 00:28:32.656 "name": "Nvme0" 00:28:32.656 }, 00:28:32.656 "method": "bdev_nvme_attach_controller" 00:28:32.656 }, 00:28:32.656 { 00:28:32.656 "method": "bdev_wait_for_examine" 00:28:32.656 } 00:28:32.656 ] 00:28:32.656 } 00:28:32.656 ] 00:28:32.656 } 00:28:32.915 [2024-02-13 07:29:06.449111] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.174 [2024-02-13 07:29:06.625356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.174 [2024-02-13 07:29:06.625776] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:33.743  Copying: 65/65 [MB] (average 382 MBps)[2024-02-13 07:29:07.186298] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:34.706 00:28:34.706 00:28:34.706 07:29:08 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:28:34.706 07:29:08 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:28:34.706 07:29:08 -- dd/common.sh@31 -- # xtrace_disable 00:28:34.706 07:29:08 -- common/autotest_common.sh@10 -- # set +x 00:28:34.706 [2024-02-13 07:29:08.303553] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
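The comparison just above closes the loop on the marker planted earlier: the 26-character string 'This Is Our Magic, find it' plus a newline was appended to the 64 MiB test file (hence the wc -c result 67108891 = 64 * 1048576 + 27), it then travelled dump0 -> Nvme0n1 -> aio1 at a 16 MiB offset, and read -rn26 pulls exactly those 26 bytes back out of the copied region; the second pass repeats this at offset 64. The same plant-and-recover pattern with plain dd (disk.img is a placeholder image):

    #!/usr/bin/env bash
    # Sketch: write a marker at a 16 MiB offset, read exactly its
    # length back, and compare (assumes GNU dd and coreutils).
    magic='This Is Our Magic, find it'
    truncate -s 64M disk.img
    printf '%s' "$magic" |
        dd of=disk.img bs=1048576 seek=16 conv=notrunc status=none
    check=$(dd if=disk.img bs=1048576 skip=16 count=1 status=none |
            head -c "${#magic}")
    [[ $check == "$magic" ]] && echo 'magic found at offset 16 MiB'
    rm -f disk.img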
00:28:34.706 [2024-02-13 07:29:08.304918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141605 ] 00:28:34.706 { 00:28:34.706 "subsystems": [ 00:28:34.706 { 00:28:34.706 "subsystem": "bdev", 00:28:34.706 "config": [ 00:28:34.706 { 00:28:34.706 "params": { 00:28:34.706 "block_size": 4096, 00:28:34.706 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:34.706 "name": "aio1" 00:28:34.706 }, 00:28:34.706 "method": "bdev_aio_create" 00:28:34.706 }, 00:28:34.706 { 00:28:34.706 "params": { 00:28:34.706 "trtype": "pcie", 00:28:34.707 "traddr": "0000:00:06.0", 00:28:34.707 "name": "Nvme0" 00:28:34.707 }, 00:28:34.707 "method": "bdev_nvme_attach_controller" 00:28:34.707 }, 00:28:34.707 { 00:28:34.707 "method": "bdev_wait_for_examine" 00:28:34.707 } 00:28:34.707 ] 00:28:34.707 } 00:28:34.707 ] 00:28:34.707 } 00:28:34.969 [2024-02-13 07:29:08.470785] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.228 [2024-02-13 07:29:08.665195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.228 [2024-02-13 07:29:08.665640] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:35.487  Copying: 1024/1024 [kB] (average 500 MBps)[2024-02-13 07:29:09.067730] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:36.864 00:28:36.864 00:28:36.864 ************************************ 00:28:36.864 END TEST dd_offset_magic 00:28:36.864 ************************************ 00:28:36.864 07:29:10 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:28:36.864 07:29:10 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:28:36.864 00:28:36.864 real 0m7.917s 00:28:36.864 user 0m5.980s 00:28:36.864 sys 0m1.200s 00:28:36.864 07:29:10 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:36.864 07:29:10 -- common/autotest_common.sh@10 -- # set +x 00:28:36.864 07:29:10 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:28:36.864 07:29:10 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:28:36.864 07:29:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:28:36.864 07:29:10 -- dd/common.sh@11 -- # local nvme_ref= 00:28:36.864 07:29:10 -- dd/common.sh@12 -- # local size=4194330 00:28:36.864 07:29:10 -- dd/common.sh@14 -- # local bs=1048576 00:28:36.864 07:29:10 -- dd/common.sh@15 -- # local count=5 00:28:36.864 07:29:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:28:36.864 07:29:10 -- dd/common.sh@18 -- # gen_conf 00:28:36.864 07:29:10 -- dd/common.sh@31 -- # xtrace_disable 00:28:36.864 07:29:10 -- common/autotest_common.sh@10 -- # set +x 00:28:36.864 [2024-02-13 07:29:10.248384] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:36.864 [2024-02-13 07:29:10.249024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141675 ] 00:28:36.864 { 00:28:36.864 "subsystems": [ 00:28:36.864 { 00:28:36.864 "subsystem": "bdev", 00:28:36.864 "config": [ 00:28:36.864 { 00:28:36.864 "params": { 00:28:36.864 "block_size": 4096, 00:28:36.864 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:36.864 "name": "aio1" 00:28:36.864 }, 00:28:36.864 "method": "bdev_aio_create" 00:28:36.864 }, 00:28:36.864 { 00:28:36.864 "params": { 00:28:36.864 "trtype": "pcie", 00:28:36.864 "traddr": "0000:00:06.0", 00:28:36.864 "name": "Nvme0" 00:28:36.864 }, 00:28:36.864 "method": "bdev_nvme_attach_controller" 00:28:36.864 }, 00:28:36.864 { 00:28:36.864 "method": "bdev_wait_for_examine" 00:28:36.864 } 00:28:36.864 ] 00:28:36.864 } 00:28:36.864 ] 00:28:36.864 } 00:28:36.864 [2024-02-13 07:29:10.416952] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.122 [2024-02-13 07:29:10.592188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.122 [2024-02-13 07:29:10.592612] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:37.381  Copying: 5120/5120 [kB] (average 1250 MBps)[2024-02-13 07:29:10.999320] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:38.757 00:28:38.757 00:28:38.757 07:29:12 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330 00:28:38.757 07:29:12 -- dd/common.sh@10 -- # local bdev=aio1 00:28:38.757 07:29:12 -- dd/common.sh@11 -- # local nvme_ref= 00:28:38.757 07:29:12 -- dd/common.sh@12 -- # local size=4194330 00:28:38.757 07:29:12 -- dd/common.sh@14 -- # local bs=1048576 00:28:38.757 07:29:12 -- dd/common.sh@15 -- # local count=5 00:28:38.757 07:29:12 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62 00:28:38.757 07:29:12 -- dd/common.sh@18 -- # gen_conf 00:28:38.757 07:29:12 -- dd/common.sh@31 -- # xtrace_disable 00:28:38.757 07:29:12 -- common/autotest_common.sh@10 -- # set +x 00:28:38.757 { 00:28:38.757 "subsystems": [ 00:28:38.757 { 00:28:38.757 "subsystem": "bdev", 00:28:38.757 "config": [ 00:28:38.757 { 00:28:38.757 "params": { 00:28:38.757 "block_size": 4096, 00:28:38.757 "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1", 00:28:38.757 "name": "aio1" 00:28:38.757 }, 00:28:38.757 "method": "bdev_aio_create" 00:28:38.757 }, 00:28:38.757 { 00:28:38.757 "params": { 00:28:38.757 "trtype": "pcie", 00:28:38.757 "traddr": "0000:00:06.0", 00:28:38.757 "name": "Nvme0" 00:28:38.757 }, 00:28:38.757 "method": "bdev_nvme_attach_controller" 00:28:38.757 }, 00:28:38.757 { 00:28:38.757 "method": "bdev_wait_for_examine" 00:28:38.757 } 00:28:38.757 ] 00:28:38.757 } 00:28:38.757 ] 00:28:38.757 } 00:28:38.757 [2024-02-13 07:29:12.117869] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:38.757 [2024-02-13 07:29:12.119016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141704 ] 00:28:38.757 [2024-02-13 07:29:12.284954] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.016 [2024-02-13 07:29:12.462985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.016 [2024-02-13 07:29:12.463403] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:39.274  Copying: 5120/5120 [kB] (average 294 MBps)[2024-02-13 07:29:12.872238] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:40.652 00:28:40.652 00:28:40.652 07:29:13 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1 00:28:40.652 ************************************ 00:28:40.652 END TEST spdk_dd_bdev_to_bdev 00:28:40.652 ************************************ 00:28:40.652 00:28:40.652 real 0m18.958s 00:28:40.652 user 0m14.455s 00:28:40.652 sys 0m3.103s 00:28:40.652 07:29:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:40.652 07:29:13 -- common/autotest_common.sh@10 -- # set +x 00:28:40.652 07:29:14 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:28:40.652 07:29:14 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:28:40.652 07:29:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:40.652 07:29:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:40.652 07:29:14 -- common/autotest_common.sh@10 -- # set +x 00:28:40.652 ************************************ 00:28:40.652 START TEST spdk_dd_sparse 00:28:40.652 ************************************ 00:28:40.652 07:29:14 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:28:40.652 * Looking for test storage... 
00:28:40.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:40.652 07:29:14 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:40.652 07:29:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:40.652 07:29:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.652 07:29:14 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:28:40.652 07:29:14 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:28:40.652 07:29:14 -- dd/sparse.sh@110 -- # file1=file_zero1 00:28:40.652 07:29:14 -- dd/sparse.sh@111 -- # file2=file_zero2 00:28:40.652 07:29:14 -- dd/sparse.sh@112 -- # file3=file_zero3 00:28:40.652 07:29:14 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:28:40.652 07:29:14 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:28:40.652 07:29:14 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:28:40.652 07:29:14 -- dd/sparse.sh@118 -- # prepare 00:28:40.652 07:29:14 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:28:40.652 07:29:14 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:28:40.652 1+0 records in 00:28:40.652 1+0 records out 00:28:40.652 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00961358 s, 436 MB/s 00:28:40.652 07:29:14 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:28:40.652 1+0 records in 00:28:40.652 1+0 records out 00:28:40.652 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00858724 s, 488 MB/s 00:28:40.652 07:29:14 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:28:40.652 1+0 records in 00:28:40.652 1+0 records out 00:28:40.652 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00989106 s, 424 MB/s 00:28:40.652 07:29:14 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:28:40.652 07:29:14 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:40.652 07:29:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:40.652 07:29:14 -- common/autotest_common.sh@10 -- # set +x 00:28:40.652 ************************************ 00:28:40.652 START TEST dd_sparse_file_to_file 00:28:40.652 ************************************ 00:28:40.652 07:29:14 -- common/autotest_common.sh@1102 -- # file_to_file 00:28:40.652 07:29:14 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:28:40.652 07:29:14 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:28:40.652 07:29:14 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:28:40.652 07:29:14 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:28:40.652 07:29:14 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(["bdev_name"]=$aio_bdev ["lvs_name"]=$lvstore) 00:28:40.652 07:29:14 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:28:40.652 07:29:14 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:28:40.652 07:29:14 -- dd/sparse.sh@41 -- # gen_conf 00:28:40.652 07:29:14 -- dd/common.sh@31 -- # xtrace_disable 00:28:40.652 07:29:14 -- common/autotest_common.sh@10 -- # set +x 00:28:40.652 [2024-02-13 07:29:14.260883] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:40.652 [2024-02-13 07:29:14.261364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141790 ] 00:28:40.652 { 00:28:40.652 "subsystems": [ 00:28:40.652 { 00:28:40.652 "subsystem": "bdev", 00:28:40.652 "config": [ 00:28:40.652 { 00:28:40.652 "params": { 00:28:40.652 "block_size": 4096, 00:28:40.652 "filename": "dd_sparse_aio_disk", 00:28:40.652 "name": "dd_aio" 00:28:40.652 }, 00:28:40.652 "method": "bdev_aio_create" 00:28:40.652 }, 00:28:40.652 { 00:28:40.652 "params": { 00:28:40.652 "lvs_name": "dd_lvstore", 00:28:40.652 "bdev_name": "dd_aio" 00:28:40.652 }, 00:28:40.652 "method": "bdev_lvol_create_lvstore" 00:28:40.652 }, 00:28:40.652 { 00:28:40.652 "method": "bdev_wait_for_examine" 00:28:40.652 } 00:28:40.652 ] 00:28:40.652 } 00:28:40.652 ] 00:28:40.652 } 00:28:40.911 [2024-02-13 07:29:14.429055] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:40.911 [2024-02-13 07:29:14.606202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:40.911 [2024-02-13 07:29:14.606595] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:41.479  Copying: 12/36 [MB] (average 1000 MBps)[2024-02-13 07:29:14.989081] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:42.856 00:28:42.856 00:28:42.856 07:29:16 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:28:42.856 07:29:16 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:28:42.856 07:29:16 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:28:42.856 07:29:16 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:28:42.856 07:29:16 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:28:42.856 07:29:16 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:28:42.856 07:29:16 -- dd/sparse.sh@52 -- # stat1_b=24576 00:28:42.856 07:29:16 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:28:42.856 ************************************ 00:28:42.856 END TEST dd_sparse_file_to_file 00:28:42.856 ************************************ 00:28:42.856 07:29:16 -- dd/sparse.sh@53 -- # stat2_b=24576 00:28:42.856 07:29:16 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:28:42.856 00:28:42.856 real 0m1.958s 00:28:42.856 user 0m1.482s 00:28:42.856 sys 0m0.337s 00:28:42.856 07:29:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:42.856 07:29:16 -- common/autotest_common.sh@10 -- # set +x 00:28:42.856 07:29:16 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:28:42.856 07:29:16 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:42.856 07:29:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:42.856 07:29:16 -- common/autotest_common.sh@10 -- # set +x 00:28:42.856 ************************************ 00:28:42.856 START TEST dd_sparse_file_to_bdev 00:28:42.856 ************************************ 00:28:42.856 07:29:16 -- common/autotest_common.sh@1102 -- # file_to_bdev 00:28:42.856 07:29:16 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:28:42.856 07:29:16 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:28:42.856 
07:29:16 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(["lvs_name"]=$lvstore ["lvol_name"]=$lvol ["size"]=37748736 ["thin_provision"]=true) 00:28:42.856 07:29:16 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:28:42.856 07:29:16 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:28:42.856 07:29:16 -- dd/sparse.sh@73 -- # gen_conf 00:28:42.856 07:29:16 -- dd/common.sh@31 -- # xtrace_disable 00:28:42.856 07:29:16 -- common/autotest_common.sh@10 -- # set +x 00:28:42.856 [2024-02-13 07:29:16.269265] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:42.856 [2024-02-13 07:29:16.269684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141856 ] 00:28:42.856 { 00:28:42.856 "subsystems": [ 00:28:42.856 { 00:28:42.856 "subsystem": "bdev", 00:28:42.856 "config": [ 00:28:42.856 { 00:28:42.856 "params": { 00:28:42.856 "block_size": 4096, 00:28:42.856 "filename": "dd_sparse_aio_disk", 00:28:42.856 "name": "dd_aio" 00:28:42.856 }, 00:28:42.857 "method": "bdev_aio_create" 00:28:42.857 }, 00:28:42.857 { 00:28:42.857 "params": { 00:28:42.857 "lvs_name": "dd_lvstore", 00:28:42.857 "thin_provision": true, 00:28:42.857 "lvol_name": "dd_lvol", 00:28:42.857 "size": 37748736 00:28:42.857 }, 00:28:42.857 "method": "bdev_lvol_create" 00:28:42.857 }, 00:28:42.857 { 00:28:42.857 "method": "bdev_wait_for_examine" 00:28:42.857 } 00:28:42.857 ] 00:28:42.857 } 00:28:42.857 ] 00:28:42.857 } 00:28:42.857 [2024-02-13 07:29:16.437632] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.116 [2024-02-13 07:29:16.615591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.116 [2024-02-13 07:29:16.615997] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:43.375 [2024-02-13 07:29:16.910558] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:28:43.375  Copying: 12/36 [MB] (average 500 MBps)[2024-02-13 07:29:16.972621] app.c: 881:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:28:43.375 [2024-02-13 07:29:16.972808] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:44.750 00:28:44.750 00:28:44.750 ************************************ 00:28:44.750 END TEST dd_sparse_file_to_bdev 00:28:44.750 ************************************ 00:28:44.750 00:28:44.750 real 0m1.910s 00:28:44.750 user 0m1.483s 00:28:44.750 sys 0m0.332s 00:28:44.750 07:29:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:44.750 07:29:18 -- common/autotest_common.sh@10 -- # set +x 00:28:44.750 07:29:18 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:28:44.750 07:29:18 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:44.750 07:29:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:44.750 07:29:18 -- 
common/autotest_common.sh@10 -- # set +x 00:28:44.750 ************************************ 00:28:44.750 START TEST dd_sparse_bdev_to_file 00:28:44.750 ************************************ 00:28:44.750 07:29:18 -- common/autotest_common.sh@1102 -- # bdev_to_file 00:28:44.750 07:29:18 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:28:44.750 07:29:18 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:28:44.750 07:29:18 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(["filename"]=$aio_disk ["name"]=$aio_bdev ["block_size"]=4096) 00:28:44.750 07:29:18 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:28:44.750 07:29:18 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:28:44.750 07:29:18 -- dd/sparse.sh@91 -- # gen_conf 00:28:44.750 07:29:18 -- dd/common.sh@31 -- # xtrace_disable 00:28:44.750 07:29:18 -- common/autotest_common.sh@10 -- # set +x 00:28:44.750 { 00:28:44.750 "subsystems": [ 00:28:44.750 { 00:28:44.750 "subsystem": "bdev", 00:28:44.750 "config": [ 00:28:44.750 { 00:28:44.750 "params": { 00:28:44.750 "block_size": 4096, 00:28:44.750 "filename": "dd_sparse_aio_disk", 00:28:44.750 "name": "dd_aio" 00:28:44.750 }, 00:28:44.750 "method": "bdev_aio_create" 00:28:44.750 }, 00:28:44.750 { 00:28:44.750 "method": "bdev_wait_for_examine" 00:28:44.750 } 00:28:44.750 ] 00:28:44.750 } 00:28:44.750 ] 00:28:44.750 } 00:28:44.750 [2024-02-13 07:29:18.238746] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:44.750 [2024-02-13 07:29:18.239097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid141907 ] 00:28:44.750 [2024-02-13 07:29:18.405911] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.008 [2024-02-13 07:29:18.584744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.008 [2024-02-13 07:29:18.585148] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:45.266  Copying: 12/36 [MB] (average 857 MBps)[2024-02-13 07:29:18.931458] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:46.640 00:28:46.640 00:28:46.640 07:29:20 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:28:46.640 07:29:20 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:28:46.640 07:29:20 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:28:46.640 07:29:20 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:28:46.640 07:29:20 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:28:46.640 07:29:20 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:28:46.640 07:29:20 -- dd/sparse.sh@102 -- # stat2_b=24576 00:28:46.640 07:29:20 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:28:46.640 ************************************ 00:28:46.640 END TEST dd_sparse_bdev_to_file 00:28:46.640 ************************************ 00:28:46.640 07:29:20 -- dd/sparse.sh@103 -- # stat3_b=24576 00:28:46.640 07:29:20 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:28:46.640 00:28:46.640 real 0m1.932s 00:28:46.640 user 0m1.530s 00:28:46.640 sys 0m0.292s 
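All three sparse cases rest on the same layout: the dd calls at the top of sparse.sh write 4 MiB of data at offsets 0, 16 MiB and 32 MiB of file_zero1, leaving holes in between, and each test then compares apparent size (stat %s) against allocated blocks (stat %b) to prove the holes survived the copy. The arithmetic behind the checks above, shown as bare commands (assuming the usual 512-byte stat block unit):

  dd if=/dev/zero of=file_zero1 bs=4M count=1          # data at offset 0
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4   # data at 16 MiB
  dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8   # data at 32 MiB
  stat --printf=%s file_zero1   # 37748736 bytes apparent (36 MiB)
  stat --printf=%b file_zero1   # 24576 blocks = 12 MiB actually allocated

A dense copy of the same file would allocate 73728 blocks, so the matching 24576 reported for file_zero2 and file_zero3 above confirms that --sparse skipped the holes in both directions.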
00:28:46.640 07:29:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:46.640 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.640 07:29:20 -- dd/sparse.sh@1 -- # cleanup 00:28:46.640 07:29:20 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:28:46.640 07:29:20 -- dd/sparse.sh@12 -- # rm file_zero1 00:28:46.640 07:29:20 -- dd/sparse.sh@13 -- # rm file_zero2 00:28:46.640 07:29:20 -- dd/sparse.sh@14 -- # rm file_zero3 00:28:46.640 ************************************ 00:28:46.640 END TEST spdk_dd_sparse 00:28:46.640 ************************************ 00:28:46.640 00:28:46.640 real 0m6.117s 00:28:46.640 user 0m4.638s 00:28:46.640 sys 0m1.113s 00:28:46.640 07:29:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:46.640 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.640 07:29:20 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:28:46.640 07:29:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:46.640 07:29:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:46.640 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.640 ************************************ 00:28:46.640 START TEST spdk_dd_negative 00:28:46.640 ************************************ 00:28:46.640 07:29:20 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:28:46.640 * Looking for test storage... 00:28:46.640 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:28:46.640 07:29:20 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:46.640 07:29:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.640 07:29:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.640 07:29:20 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:46.640 07:29:20 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:46.640 07:29:20 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:46.640 07:29:20 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:28:46.640 07:29:20 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:28:46.640 07:29:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:46.640 07:29:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:46.640 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.640 ************************************ 00:28:46.640 START TEST dd_invalid_arguments 00:28:46.640 ************************************ 00:28:46.640 07:29:20 -- common/autotest_common.sh@1102 -- # invalid_arguments 00:28:46.640 07:29:20 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:28:46.640 07:29:20 -- common/autotest_common.sh@638 -- # local es=0 00:28:46.640 07:29:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:28:46.640 07:29:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.640 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:46.640 07:29:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.640 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:46.640 07:29:20 -- common/autotest_common.sh@632 
-- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.640 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:46.640 07:29:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.640 07:29:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:46.640 07:29:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:28:46.899 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:28:46.899 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:28:46.899 options: 00:28:46.899 -c, --config JSON config file (default none) 00:28:46.899 --json JSON config file (default none) 00:28:46.899 --json-ignore-init-errors 00:28:46.899 don't exit on invalid config entry 00:28:46.899 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:28:46.899 -g, --single-file-segments 00:28:46.899 force creating just one hugetlbfs file 00:28:46.899 -h, --help show this usage 00:28:46.899 -i, --shm-id shared memory ID (optional) 00:28:46.899 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:28:46.899 --lcores lcore to CPU mapping list. The list is in the format: 00:28:46.899 [<,lcores[@CPUs]>...] 00:28:46.899 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:28:46.899 Within the group, '-' is used for range separator, 00:28:46.899 ',' is used for single number separator. 00:28:46.899 '( )' can be omitted for single element group, 00:28:46.899 '@' can be omitted if cpus and lcores have the same value 00:28:46.899 -n, --mem-channels channel number of memory channels used for DPDK 00:28:46.899 -p, --main-core main (primary) core for DPDK 00:28:46.899 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:28:46.899 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:28:46.899 --disable-cpumask-locks Disable CPU core lock files. 00:28:46.899 --silence-noticelog disable notice level logging to stderr 00:28:46.899 --msg-mempool-size global message memory pool size in count (default: 262143) 00:28:46.899 -u, --no-pci disable PCI access 00:28:46.899 --wait-for-rpc wait for RPCs to initialize subsystems 00:28:46.899 --max-delay maximum reactor delay (in microseconds) 00:28:46.899 -B, --pci-blocked pci addr to block (can be used more than once) 00:28:46.899 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:28:46.899 -R, --huge-unlink unlink huge files after initialization 00:28:46.899 -v, --version print SPDK version 00:28:46.899 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:28:46.899 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:28:46.899 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:28:46.899 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:28:46.899 Tracepoints vary in size and can use more than one trace entry. 
00:28:46.899 --rpcs-allowed comma-separated list of permitted RPCS 00:28:46.899 --env-context Opaque context for use of the env implementation 00:28:46.899 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:28:46.899 --no-huge run without using hugepages 00:28:46.899 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid5f, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:28:46.899 -e, --tpoint-group [:] 00:28:46.899 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:28:46.899 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:28:46.899 Groups and masks[2024-02-13 07:29:20.376975] spdk_dd.c:1461:main: *ERROR*: Invalid arguments 00:28:46.899 can be combined (e.g. thread,bdev:0x1). 00:28:46.899 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:28:46.899 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:28:46.899 [--------- DD Options ---------] 00:28:46.899 --if Input file. Must specify either --if or --ib. 00:28:46.899 --ib Input bdev. Must specifier either --if or --ib 00:28:46.899 --of Output file. Must specify either --of or --ob. 00:28:46.899 --ob Output bdev. Must specify either --of or --ob. 00:28:46.899 --iflag Input file flags. 00:28:46.899 --oflag Output file flags. 00:28:46.899 --bs I/O unit size (default: 4096) 00:28:46.899 --qd Queue depth (default: 2) 00:28:46.899 --count I/O unit count. The number of I/O units to copy. (default: all) 00:28:46.899 --skip Skip this many I/O units at start of input. (default: 0) 00:28:46.899 --seek Skip this many I/O units at start of output. (default: 0) 00:28:46.899 --aio Force usage of AIO. 
(by default io_uring is used if available) 00:28:46.899 --sparse Enable hole skipping in input target 00:28:46.899 Available iflag and oflag values: 00:28:46.899 append - append mode 00:28:46.899 direct - use direct I/O for data 00:28:46.899 directory - fail unless a directory 00:28:46.899 dsync - use synchronized I/O for data 00:28:46.899 noatime - do not update access time 00:28:46.899 noctty - do not assign controlling terminal from file 00:28:46.899 nofollow - do not follow symlinks 00:28:46.899 nonblock - use non-blocking I/O 00:28:46.899 sync - use synchronized I/O for data and metadata 00:28:46.899 ************************************ 00:28:46.899 END TEST dd_invalid_arguments 00:28:46.899 ************************************ 00:28:46.899 07:29:20 -- common/autotest_common.sh@641 -- # es=2 00:28:46.899 07:29:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:46.899 07:29:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:46.899 07:29:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:46.899 00:28:46.899 real 0m0.110s 00:28:46.899 user 0m0.059s 00:28:46.899 sys 0m0.050s 00:28:46.899 07:29:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:46.899 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.899 07:29:20 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:28:46.899 07:29:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:46.899 07:29:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:46.899 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.899 ************************************ 00:28:46.899 START TEST dd_double_input 00:28:46.899 ************************************ 00:28:46.899 07:29:20 -- common/autotest_common.sh@1102 -- # double_input 00:28:46.899 07:29:20 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:28:46.899 07:29:20 -- common/autotest_common.sh@638 -- # local es=0 00:28:46.900 07:29:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:28:46.900 07:29:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.900 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:46.900 07:29:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.900 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:46.900 07:29:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.900 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:46.900 07:29:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:46.900 07:29:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:46.900 07:29:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:28:46.900 [2024-02-13 07:29:20.543601] spdk_dd.c:1468:main: *ERROR*: You may specify either --if or --ib, but not both. 
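dd_invalid_arguments and dd_double_input probe spdk_dd's argument validation from two sides: an unknown option is rejected by the parser itself (the usage dump and es=2 above), while a syntactically valid but contradictory pair of flags reaches main() and fails with the EINVAL-style status 22 traced just below. Reduced to bare invocations (same binary and test files as the run; the empty --ib=/--ob= values are deliberate, copied from the test, and the final well-formed line is only an illustrative baseline):

  SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd   # binary used throughout this log
  $SPDK_DD --ii= --ob=                          # unknown option: prints usage, exit 2
  $SPDK_DD --if=$test_file0 --ib= --ob=         # two inputs: 'either --if or --ib', exit 22
  $SPDK_DD --if=$test_file0 --of=$test_file1    # well-formed baseline: one input, one output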
00:28:46.900 07:29:20 -- common/autotest_common.sh@641 -- # es=22 00:28:46.900 07:29:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:46.900 07:29:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:46.900 07:29:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:46.900 00:28:46.900 real 0m0.111s 00:28:46.900 user 0m0.040s 00:28:46.900 sys 0m0.068s 00:28:46.900 07:29:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:46.900 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:46.900 ************************************ 00:28:46.900 END TEST dd_double_input 00:28:46.900 ************************************ 00:28:47.158 07:29:20 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:28:47.158 07:29:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:47.158 07:29:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:47.158 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:47.158 ************************************ 00:28:47.158 START TEST dd_double_output 00:28:47.158 ************************************ 00:28:47.158 07:29:20 -- common/autotest_common.sh@1102 -- # double_output 00:28:47.159 07:29:20 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:28:47.159 07:29:20 -- common/autotest_common.sh@638 -- # local es=0 00:28:47.159 07:29:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:28:47.159 07:29:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.159 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.159 07:29:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.159 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.159 07:29:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.159 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.159 07:29:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.159 07:29:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:47.159 07:29:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:28:47.159 [2024-02-13 07:29:20.708286] spdk_dd.c:1474:main: *ERROR*: You may specify either --of or --ob, but not both. 
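Each negative case is wrapped in the harness's NOT helper, whose traced steps repeat after every failure: capture the exit status in es, fold signal-style statuses above 128 (the es=244 -> es=116 -> es=1 sequence in dd_smaller_blocksize further down), then succeed only if the status is non-zero. A condensed reconstruction of that logic from the traced lines (the real autotest_common.sh also routes folded codes through a case table, collapsed to es=1 here):

  NOT() {
      local es=0
      "$@" || es=$?
      if (( es > 128 )); then
          es=$(( es - 128 ))   # e.g. 244 becomes 116, as traced below
          es=1                 # stand-in for the harness's case-table mapping
      fi
      (( !es == 0 ))           # exit 0 only when the wrapped command failed
  }

  NOT $SPDK_DD --if=$test_file0 --of=$test_file1 --ob=   # passes: spdk_dd exits 22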
00:28:47.159 07:29:20 -- common/autotest_common.sh@641 -- # es=22 00:28:47.159 07:29:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:47.159 07:29:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:47.159 07:29:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:47.159 00:28:47.159 real 0m0.112s 00:28:47.159 user 0m0.065s 00:28:47.159 sys 0m0.044s 00:28:47.159 07:29:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:47.159 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:47.159 ************************************ 00:28:47.159 END TEST dd_double_output 00:28:47.159 ************************************ 00:28:47.159 07:29:20 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:28:47.159 07:29:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:47.159 07:29:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:47.159 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:47.159 ************************************ 00:28:47.159 START TEST dd_no_input 00:28:47.159 ************************************ 00:28:47.159 07:29:20 -- common/autotest_common.sh@1102 -- # no_input 00:28:47.159 07:29:20 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:28:47.159 07:29:20 -- common/autotest_common.sh@638 -- # local es=0 00:28:47.159 07:29:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:28:47.159 07:29:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.159 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.159 07:29:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.159 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.159 07:29:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.159 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.159 07:29:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.159 07:29:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:47.159 07:29:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:28:47.432 [2024-02-13 07:29:20.877547] spdk_dd.c:1480:main: *ERROR*: You must specify either --if or --ib 00:28:47.432 07:29:20 -- common/autotest_common.sh@641 -- # es=22 00:28:47.432 07:29:20 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:47.432 07:29:20 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:47.432 07:29:20 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:47.432 00:28:47.432 real 0m0.114s 00:28:47.432 user 0m0.064s 00:28:47.432 sys 0m0.046s 00:28:47.432 07:29:20 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:47.432 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:47.432 ************************************ 00:28:47.432 END TEST dd_no_input 00:28:47.432 ************************************ 00:28:47.432 07:29:20 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:28:47.432 07:29:20 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:47.432 07:29:20 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:47.432 07:29:20 -- common/autotest_common.sh@10 -- # set +x 00:28:47.432 ************************************ 
00:28:47.432 START TEST dd_no_output 00:28:47.432 ************************************ 00:28:47.432 07:29:20 -- common/autotest_common.sh@1102 -- # no_output 00:28:47.432 07:29:20 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:47.432 07:29:20 -- common/autotest_common.sh@638 -- # local es=0 00:28:47.432 07:29:20 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:47.432 07:29:20 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.432 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.432 07:29:20 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.432 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.432 07:29:20 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.432 07:29:20 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.432 07:29:20 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.432 07:29:20 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:47.432 07:29:20 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:28:47.432 [2024-02-13 07:29:21.051593] spdk_dd.c:1486:main: *ERROR*: You must specify either --of or --ob 00:28:47.432 ************************************ 00:28:47.432 END TEST dd_no_output 00:28:47.432 ************************************ 00:28:47.432 07:29:21 -- common/autotest_common.sh@641 -- # es=22 00:28:47.432 07:29:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:47.432 07:29:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:47.432 07:29:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:47.432 00:28:47.432 real 0m0.114s 00:28:47.432 user 0m0.055s 00:28:47.432 sys 0m0.057s 00:28:47.432 07:29:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:47.432 07:29:21 -- common/autotest_common.sh@10 -- # set +x 00:28:47.691 07:29:21 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:28:47.691 07:29:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:47.691 07:29:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:47.691 07:29:21 -- common/autotest_common.sh@10 -- # set +x 00:28:47.691 ************************************ 00:28:47.691 START TEST dd_wrong_blocksize 00:28:47.691 ************************************ 00:28:47.691 07:29:21 -- common/autotest_common.sh@1102 -- # wrong_blocksize 00:28:47.691 07:29:21 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:28:47.691 07:29:21 -- common/autotest_common.sh@638 -- # local es=0 00:28:47.691 07:29:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:28:47.691 07:29:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.691 07:29:21 -- common/autotest_common.sh@630 -- # case 
"$(type -t "$arg")" in 00:28:47.691 07:29:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.691 07:29:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.691 07:29:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.691 07:29:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.691 07:29:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.691 07:29:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:47.691 07:29:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:28:47.691 [2024-02-13 07:29:21.217716] spdk_dd.c:1492:main: *ERROR*: Invalid --bs value 00:28:47.691 07:29:21 -- common/autotest_common.sh@641 -- # es=22 00:28:47.691 07:29:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:47.691 07:29:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:47.691 07:29:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:47.691 00:28:47.691 real 0m0.115s 00:28:47.691 user 0m0.066s 00:28:47.691 sys 0m0.046s 00:28:47.691 07:29:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:47.691 07:29:21 -- common/autotest_common.sh@10 -- # set +x 00:28:47.691 ************************************ 00:28:47.691 END TEST dd_wrong_blocksize 00:28:47.691 ************************************ 00:28:47.691 07:29:21 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:28:47.691 07:29:21 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:47.691 07:29:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:47.691 07:29:21 -- common/autotest_common.sh@10 -- # set +x 00:28:47.691 ************************************ 00:28:47.691 START TEST dd_smaller_blocksize 00:28:47.691 ************************************ 00:28:47.691 07:29:21 -- common/autotest_common.sh@1102 -- # smaller_blocksize 00:28:47.691 07:29:21 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:28:47.691 07:29:21 -- common/autotest_common.sh@638 -- # local es=0 00:28:47.691 07:29:21 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:28:47.691 07:29:21 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.691 07:29:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.691 07:29:21 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.691 07:29:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.691 07:29:21 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.691 07:29:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:47.691 07:29:21 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:47.691 07:29:21 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 
00:28:47.691 07:29:21 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:28:47.950 [2024-02-13 07:29:21.398202] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:47.950 [2024-02-13 07:29:21.398405] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142190 ] 00:28:47.950 [2024-02-13 07:29:21.570126] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.208 [2024-02-13 07:29:21.823509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.775 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:28:48.775 [2024-02-13 07:29:22.430662] spdk_dd.c:1169:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:28:48.775 [2024-02-13 07:29:22.430777] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:49.712 [2024-02-13 07:29:23.080316] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:28:49.972 ************************************ 00:28:49.972 END TEST dd_smaller_blocksize 00:28:49.972 ************************************ 00:28:49.972 07:29:23 -- common/autotest_common.sh@641 -- # es=244 00:28:49.972 07:29:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:49.972 07:29:23 -- common/autotest_common.sh@650 -- # es=116 00:28:49.972 07:29:23 -- common/autotest_common.sh@651 -- # case "$es" in 00:28:49.972 07:29:23 -- common/autotest_common.sh@658 -- # es=1 00:28:49.972 07:29:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:49.972 00:28:49.972 real 0m2.119s 00:28:49.972 user 0m1.494s 00:28:49.972 sys 0m0.522s 00:28:49.972 07:29:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:49.972 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:28:49.972 07:29:23 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:28:49.972 07:29:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:49.972 07:29:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:49.972 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:28:49.972 ************************************ 00:28:49.972 START TEST dd_invalid_count 00:28:49.972 ************************************ 00:28:49.972 07:29:23 -- common/autotest_common.sh@1102 -- # invalid_count 00:28:49.972 07:29:23 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:28:49.972 07:29:23 -- common/autotest_common.sh@638 -- # local es=0 00:28:49.972 07:29:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:28:49.972 07:29:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:49.972 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:49.972 07:29:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:49.972 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:49.972 07:29:23 
-- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:49.972 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:49.972 07:29:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:49.972 07:29:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:49.972 07:29:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:28:49.972 [2024-02-13 07:29:23.573579] spdk_dd.c:1498:main: *ERROR*: Invalid --count value 00:28:49.972 07:29:23 -- common/autotest_common.sh@641 -- # es=22 00:28:49.972 07:29:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:49.972 07:29:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:49.972 07:29:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:49.972 00:28:49.972 real 0m0.108s 00:28:49.972 user 0m0.055s 00:28:49.972 ************************************ 00:28:49.972 END TEST dd_invalid_count 00:28:49.972 ************************************ 00:28:49.972 sys 0m0.049s 00:28:49.972 07:29:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:49.972 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:28:49.972 07:29:23 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:28:49.972 07:29:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:49.972 07:29:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:49.972 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:28:50.232 ************************************ 00:28:50.232 START TEST dd_invalid_oflag 00:28:50.232 ************************************ 00:28:50.232 07:29:23 -- common/autotest_common.sh@1102 -- # invalid_oflag 00:28:50.232 07:29:23 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:28:50.232 07:29:23 -- common/autotest_common.sh@638 -- # local es=0 00:28:50.232 07:29:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:28:50.232 07:29:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.232 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.232 07:29:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.232 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.232 07:29:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.232 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.232 07:29:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.232 07:29:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:50.232 07:29:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:28:50.232 [2024-02-13 07:29:23.725325] spdk_dd.c:1504:main: *ERROR*: --oflags may be used only with --of 00:28:50.232 07:29:23 -- common/autotest_common.sh@641 -- # es=22 00:28:50.232 07:29:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:50.232 07:29:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:50.232 
07:29:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:50.232 00:28:50.232 real 0m0.098s 00:28:50.232 user 0m0.064s 00:28:50.232 sys 0m0.032s 00:28:50.232 07:29:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:50.232 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:28:50.232 ************************************ 00:28:50.232 END TEST dd_invalid_oflag 00:28:50.232 ************************************ 00:28:50.232 07:29:23 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:28:50.232 07:29:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:50.232 07:29:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:50.232 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:28:50.232 ************************************ 00:28:50.232 START TEST dd_invalid_iflag 00:28:50.232 ************************************ 00:28:50.232 07:29:23 -- common/autotest_common.sh@1102 -- # invalid_iflag 00:28:50.232 07:29:23 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:28:50.232 07:29:23 -- common/autotest_common.sh@638 -- # local es=0 00:28:50.232 07:29:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:28:50.232 07:29:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.232 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.232 07:29:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.232 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.232 07:29:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.232 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.232 07:29:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.232 07:29:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:50.232 07:29:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:28:50.232 [2024-02-13 07:29:23.894417] spdk_dd.c:1510:main: *ERROR*: --iflags may be used only with --if 00:28:50.491 07:29:23 -- common/autotest_common.sh@641 -- # es=22 00:28:50.491 07:29:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:50.491 07:29:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:50.491 07:29:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:50.491 00:28:50.491 real 0m0.113s 00:28:50.491 user 0m0.068s 00:28:50.491 sys 0m0.042s 00:28:50.491 07:29:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:50.491 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:28:50.491 ************************************ 00:28:50.491 END TEST dd_invalid_iflag 00:28:50.491 ************************************ 00:28:50.491 07:29:23 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:28:50.491 07:29:23 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:50.491 07:29:23 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:50.491 07:29:23 -- common/autotest_common.sh@10 -- # set +x 00:28:50.491 ************************************ 00:28:50.491 START TEST dd_unknown_flag 00:28:50.491 ************************************ 00:28:50.491 07:29:23 -- common/autotest_common.sh@1102 -- # 
unknown_flag 00:28:50.491 07:29:23 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:28:50.491 07:29:23 -- common/autotest_common.sh@638 -- # local es=0 00:28:50.491 07:29:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:28:50.492 07:29:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.492 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.492 07:29:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.492 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.492 07:29:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.492 07:29:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:50.492 07:29:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:50.492 07:29:24 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:50.492 07:29:24 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:28:50.492 [2024-02-13 07:29:24.063626] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:50.492 [2024-02-13 07:29:24.063841] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142319 ] 00:28:50.751 [2024-02-13 07:29:24.226973] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.751 [2024-02-13 07:29:24.411674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.010 [2024-02-13 07:29:24.691543] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:28:51.010 [2024-02-13 07:29:24.691642] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:51.010  Copying: 0/0 [B] (average 0 Bps)[2024-02-13 07:29:24.691816] app.c: 895:app_stop: *NOTICE*: spdk_app_stop called twice 00:28:51.946 [2024-02-13 07:29:25.321460] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:28:52.205 00:28:52.205 00:28:52.205 07:29:25 -- common/autotest_common.sh@641 -- # es=234 00:28:52.205 07:29:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:52.205 07:29:25 -- common/autotest_common.sh@650 -- # es=106 00:28:52.205 07:29:25 -- common/autotest_common.sh@651 -- # case "$es" in 00:28:52.205 07:29:25 -- common/autotest_common.sh@658 -- # es=1 00:28:52.205 07:29:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:52.205 00:28:52.205 real 0m1.749s 00:28:52.205 user 0m1.346s 00:28:52.205 sys 0m0.263s 00:28:52.205 07:29:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:52.205 07:29:25 -- common/autotest_common.sh@10 -- # set +x 00:28:52.205 ************************************ 00:28:52.205 END TEST dd_unknown_flag 00:28:52.205 ************************************ 00:28:52.205 07:29:25 -- dd/negative_dd.sh@118 -- # run_test 
dd_invalid_json invalid_json 00:28:52.205 07:29:25 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:28:52.206 07:29:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:52.206 07:29:25 -- common/autotest_common.sh@10 -- # set +x 00:28:52.206 ************************************ 00:28:52.206 START TEST dd_invalid_json 00:28:52.206 ************************************ 00:28:52.206 07:29:25 -- common/autotest_common.sh@1102 -- # invalid_json 00:28:52.206 07:29:25 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:28:52.206 07:29:25 -- common/autotest_common.sh@638 -- # local es=0 00:28:52.206 07:29:25 -- dd/negative_dd.sh@95 -- # : 00:28:52.206 07:29:25 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:28:52.206 07:29:25 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:52.206 07:29:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:52.206 07:29:25 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:52.206 07:29:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:52.206 07:29:25 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:52.206 07:29:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:52.206 07:29:25 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:28:52.206 07:29:25 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:28:52.206 07:29:25 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:28:52.206 [2024-02-13 07:29:25.872408] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
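Every spdk_dd call in this log receives its bdev configuration the same way: gen_conf prints JSON on a pipe and the binary reads it back through --json /dev/fd/62. dd_invalid_json swaps gen_conf for the no-op ':' (the 'dd/negative_dd.sh@95 -- # :' trace above), so fd 62 delivers zero bytes and initialization aborts with 'JSON data cannot be empty' before any copy starts. The same plumbing via bash process substitution (an illustrative equivalent of the harness's redirection, with an assumed-minimal empty-subsystems config):

  # config arrives on an fd-backed path; '{"subsystems": []}' assumed as a minimal valid input
  $SPDK_DD --if=$test_file0 --of=$test_file1 --json <(echo '{"subsystems": []}')
  # negative case: an empty stream fails JSON parsing, and NOT folds the status as traced below
  NOT $SPDK_DD --if=$test_file0 --of=$test_file1 --json <(:)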
00:28:52.206 [2024-02-13 07:29:25.872618] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142366 ] 00:28:52.465 [2024-02-13 07:29:26.041096] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.725 [2024-02-13 07:29:26.225548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.725 [2024-02-13 07:29:26.225676] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:52.725 [2024-02-13 07:29:26.225815] json_config.c: 525:parse_json: *ERROR*: JSON data cannot be empty 00:28:52.725 [2024-02-13 07:29:26.225849] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:28:52.725 [2024-02-13 07:29:26.225893] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:52.725 [2024-02-13 07:29:26.225959] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:52.725 [2024-02-13 07:29:26.226011] spdk_dd.c:1517:main: *ERROR*: Error occurred while performing copy 00:28:52.984 07:29:26 -- common/autotest_common.sh@641 -- # es=234 00:28:52.984 07:29:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:52.984 07:29:26 -- common/autotest_common.sh@650 -- # es=106 00:28:52.984 ************************************ 00:28:52.984 END TEST dd_invalid_json 00:28:52.984 ************************************ 00:28:52.984 07:29:26 -- common/autotest_common.sh@651 -- # case "$es" in 00:28:52.984 07:29:26 -- common/autotest_common.sh@658 -- # es=1 00:28:52.984 07:29:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:52.984 00:28:52.984 real 0m0.760s 00:28:52.984 user 0m0.514s 00:28:52.984 sys 0m0.147s 00:28:52.984 07:29:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:52.984 07:29:26 -- common/autotest_common.sh@10 -- # set +x 00:28:52.984 00:28:52.984 real 0m6.386s 00:28:52.984 user 0m4.258s 00:28:52.984 sys 0m1.670s 00:28:52.984 ************************************ 00:28:52.984 END TEST spdk_dd_negative 00:28:52.984 ************************************ 00:28:52.984 07:29:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:52.984 07:29:26 -- common/autotest_common.sh@10 -- # set +x 00:28:52.984 00:28:52.984 real 2m27.968s 00:28:52.984 user 1m54.371s 00:28:52.984 sys 0m23.467s 00:28:52.984 ************************************ 00:28:52.984 END TEST spdk_dd 00:28:52.984 ************************************ 00:28:52.984 07:29:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:52.984 07:29:26 -- common/autotest_common.sh@10 -- # set +x 00:28:52.984 07:29:26 -- spdk/autotest.sh@217 -- # '[' 1 -eq 1 ']' 00:28:52.984 07:29:26 -- spdk/autotest.sh@218 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:52.984 07:29:26 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:28:52.984 07:29:26 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:52.984 07:29:26 -- common/autotest_common.sh@10 -- # set +x 00:28:53.245 ************************************ 00:28:53.245 START TEST blockdev_nvme 00:28:53.245 ************************************ 00:28:53.245 07:29:26 -- 
common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:28:53.245 * Looking for test storage... 00:28:53.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:28:53.245 07:29:26 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:28:53.245 07:29:26 -- bdev/nbd_common.sh@6 -- # set -e 00:28:53.245 07:29:26 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:28:53.245 07:29:26 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:28:53.245 07:29:26 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:28:53.245 07:29:26 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:28:53.245 07:29:26 -- bdev/blockdev.sh@18 -- # : 00:28:53.245 07:29:26 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:28:53.245 07:29:26 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:28:53.245 07:29:26 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:28:53.245 07:29:26 -- bdev/blockdev.sh@672 -- # uname -s 00:28:53.245 07:29:26 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:28:53.245 07:29:26 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:28:53.245 07:29:26 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:28:53.245 07:29:26 -- bdev/blockdev.sh@681 -- # crypto_device= 00:28:53.245 07:29:26 -- bdev/blockdev.sh@682 -- # dek= 00:28:53.245 07:29:26 -- bdev/blockdev.sh@683 -- # env_ctx= 00:28:53.245 07:29:26 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:28:53.245 07:29:26 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:28:53.245 07:29:26 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:28:53.245 07:29:26 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:28:53.245 07:29:26 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:28:53.245 07:29:26 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=142461 00:28:53.245 07:29:26 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:28:53.245 07:29:26 -- bdev/blockdev.sh@47 -- # waitforlisten 142461 00:28:53.245 07:29:26 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:28:53.245 07:29:26 -- common/autotest_common.sh@817 -- # '[' -z 142461 ']' 00:28:53.245 07:29:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.245 07:29:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:53.245 07:29:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.245 07:29:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:53.245 07:29:26 -- common/autotest_common.sh@10 -- # set +x 00:28:53.245 [2024-02-13 07:29:26.856025] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:53.245 [2024-02-13 07:29:26.857094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142461 ] 00:28:53.504 [2024-02-13 07:29:27.025757] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.764 [2024-02-13 07:29:27.209587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:53.764 [2024-02-13 07:29:27.209822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.142 07:29:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:55.143 07:29:28 -- common/autotest_common.sh@850 -- # return 0 00:28:55.143 07:29:28 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:28:55.143 07:29:28 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:28:55.143 07:29:28 -- bdev/blockdev.sh@79 -- # local json 00:28:55.143 07:29:28 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:28:55.143 07:29:28 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:28:55.143 07:29:28 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:28:55.143 07:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.143 07:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.143 07:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.143 07:29:28 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:28:55.143 07:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.143 07:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.143 07:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.143 07:29:28 -- bdev/blockdev.sh@738 -- # cat 00:28:55.143 07:29:28 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:28:55.143 07:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.143 07:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.143 07:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.143 07:29:28 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:28:55.143 07:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.143 07:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.143 07:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.143 07:29:28 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:28:55.143 07:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.143 07:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.143 07:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.143 07:29:28 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:28:55.143 07:29:28 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:28:55.143 07:29:28 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:28:55.143 07:29:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:55.143 07:29:28 -- common/autotest_common.sh@10 -- # set +x 00:28:55.143 07:29:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:55.143 07:29:28 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:28:55.143 07:29:28 -- bdev/blockdev.sh@747 -- # jq -r .name 00:28:55.143 07:29:28 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' 
"aliases": [' ' "8a16fc65-0ac7-42ee-908e-ed0f48ada621"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8a16fc65-0ac7-42ee-908e-ed0f48ada621",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:28:55.143 07:29:28 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:28:55.143 07:29:28 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:28:55.143 07:29:28 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:28:55.143 07:29:28 -- bdev/blockdev.sh@752 -- # killprocess 142461 00:28:55.143 07:29:28 -- common/autotest_common.sh@924 -- # '[' -z 142461 ']' 00:28:55.143 07:29:28 -- common/autotest_common.sh@928 -- # kill -0 142461 00:28:55.143 07:29:28 -- common/autotest_common.sh@929 -- # uname 00:28:55.143 07:29:28 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:28:55.143 07:29:28 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 142461 00:28:55.402 07:29:28 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:28:55.402 07:29:28 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:28:55.402 killing process with pid 142461 00:28:55.402 07:29:28 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 142461' 00:28:55.402 07:29:28 -- common/autotest_common.sh@943 -- # kill 142461 00:28:55.402 07:29:28 -- common/autotest_common.sh@948 -- # wait 142461 00:28:57.307 07:29:30 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:57.307 07:29:30 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:57.307 07:29:30 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:28:57.307 07:29:30 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:57.307 07:29:30 -- common/autotest_common.sh@10 -- # set +x 00:28:57.307 ************************************ 00:28:57.308 START TEST bdev_hello_world 00:28:57.308 ************************************ 00:28:57.308 07:29:30 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:28:57.308 [2024-02-13 07:29:30.835589] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:28:57.308 [2024-02-13 07:29:30.835747] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142573 ] 00:28:57.308 [2024-02-13 07:29:30.984690] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.566 [2024-02-13 07:29:31.158219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:57.566 [2024-02-13 07:29:31.158367] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:58.133 [2024-02-13 07:29:31.567143] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:28:58.133 [2024-02-13 07:29:31.567223] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:28:58.133 [2024-02-13 07:29:31.567276] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:28:58.133 [2024-02-13 07:29:31.569794] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:28:58.133 [2024-02-13 07:29:31.570359] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:28:58.133 [2024-02-13 07:29:31.570408] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:28:58.133 [2024-02-13 07:29:31.570687] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:28:58.133 00:28:58.133 [2024-02-13 07:29:31.570753] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:28:58.133 [2024-02-13 07:29:31.570826] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:28:59.081 00:28:59.081 real 0m1.786s 00:28:59.081 user 0m1.458s 00:28:59.081 sys 0m0.229s 00:28:59.081 ************************************ 00:28:59.081 END TEST bdev_hello_world 00:28:59.081 ************************************ 00:28:59.081 07:29:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:59.081 07:29:32 -- common/autotest_common.sh@10 -- # set +x 00:28:59.081 07:29:32 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:28:59.081 07:29:32 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:28:59.081 07:29:32 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:28:59.081 07:29:32 -- common/autotest_common.sh@10 -- # set +x 00:28:59.082 ************************************ 00:28:59.082 START TEST bdev_bounds 00:28:59.082 ************************************ 00:28:59.082 07:29:32 -- common/autotest_common.sh@1102 -- # bdev_bounds '' 00:28:59.082 07:29:32 -- bdev/blockdev.sh@288 -- # bdevio_pid=142618 00:28:59.082 Process bdevio pid: 142618 00:28:59.082 07:29:32 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:28:59.082 07:29:32 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:28:59.082 07:29:32 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 142618' 00:28:59.082 07:29:32 -- bdev/blockdev.sh@291 -- # waitforlisten 142618 00:28:59.082 07:29:32 -- common/autotest_common.sh@817 -- # '[' -z 142618 ']' 00:28:59.082 07:29:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.082 07:29:32 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:28:59.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.082 07:29:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.082 07:29:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:59.082 07:29:32 -- common/autotest_common.sh@10 -- # set +x 00:28:59.082 [2024-02-13 07:29:32.690817] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:28:59.082 [2024-02-13 07:29:32.690996] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142618 ] 00:28:59.341 [2024-02-13 07:29:32.854622] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:59.600 [2024-02-13 07:29:33.044095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.600 [2024-02-13 07:29:33.044239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:59.600 [2024-02-13 07:29:33.044257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.600 [2024-02-13 07:29:33.044516] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:28:59.859 07:29:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:59.859 07:29:33 -- common/autotest_common.sh@850 -- # return 0 00:28:59.859 07:29:33 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:00.118 I/O targets: 00:29:00.118 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:29:00.118 00:29:00.118 00:29:00.118 CUnit - A unit testing framework for C - Version 2.1-3 00:29:00.118 http://cunit.sourceforge.net/ 00:29:00.118 00:29:00.118 00:29:00.118 Suite: bdevio tests on: Nvme0n1 00:29:00.118 Test: blockdev write read block ...passed 00:29:00.118 Test: blockdev write zeroes read block ...passed 00:29:00.118 Test: blockdev write zeroes read no split ...passed 00:29:00.118 Test: blockdev write zeroes read split ...passed 00:29:00.118 Test: blockdev write zeroes read split partial ...passed 00:29:00.118 Test: blockdev reset ...[2024-02-13 07:29:33.705667] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:00.118 [2024-02-13 07:29:33.709329] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:00.118 passed 00:29:00.118 Test: blockdev write read 8 blocks ...passed 00:29:00.118 Test: blockdev write read size > 128k ...passed 00:29:00.118 Test: blockdev write read invalid size ...passed 00:29:00.118 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:00.118 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:00.118 Test: blockdev write read max offset ...passed 00:29:00.118 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:00.118 Test: blockdev writev readv 8 blocks ...passed 00:29:00.118 Test: blockdev writev readv 30 x 1block ...passed 00:29:00.118 Test: blockdev writev readv block ...passed 00:29:00.118 Test: blockdev writev readv size > 128k ...passed 00:29:00.118 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:00.118 Test: blockdev comparev and writev ...[2024-02-13 07:29:33.717219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0xa9c0d000 len:0x1000 00:29:00.118 [2024-02-13 07:29:33.717348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:00.118 passed 00:29:00.118 Test: blockdev nvme passthru rw ...passed 00:29:00.118 Test: blockdev nvme passthru vendor specific ...[2024-02-13 07:29:33.718284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:29:00.118 [2024-02-13 07:29:33.718339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:29:00.118 passed 00:29:00.118 Test: blockdev nvme admin passthru ...passed 00:29:00.118 Test: blockdev copy ...passed 00:29:00.118 00:29:00.118 Run Summary: Type Total Ran Passed Failed Inactive 00:29:00.118 suites 1 1 n/a 0 0 00:29:00.118 tests 23 23 23 0 0 00:29:00.118 asserts 152 152 152 0 n/a 00:29:00.118 00:29:00.118 Elapsed time = 0.195 seconds 00:29:00.118 0 00:29:00.118 07:29:33 -- bdev/blockdev.sh@293 -- # killprocess 142618 00:29:00.118 07:29:33 -- common/autotest_common.sh@924 -- # '[' -z 142618 ']' 00:29:00.118 07:29:33 -- common/autotest_common.sh@928 -- # kill -0 142618 00:29:00.118 07:29:33 -- common/autotest_common.sh@929 -- # uname 00:29:00.118 07:29:33 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:00.118 07:29:33 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 142618 00:29:00.118 07:29:33 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:00.118 killing process with pid 142618 00:29:00.119 07:29:33 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:00.119 07:29:33 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 142618' 00:29:00.119 07:29:33 -- common/autotest_common.sh@943 -- # kill 142618 00:29:00.119 07:29:33 -- common/autotest_common.sh@948 -- # wait 142618 00:29:00.119 [2024-02-13 07:29:33.760600] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:01.080 07:29:34 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:01.081 00:29:01.081 real 0m2.135s 00:29:01.081 user 0m4.866s 00:29:01.081 sys 0m0.395s 00:29:01.081 07:29:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:01.081 ************************************ 00:29:01.081 END TEST bdev_bounds 00:29:01.081 
************************************ 00:29:01.081 07:29:34 -- common/autotest_common.sh@10 -- # set +x 00:29:01.340 07:29:34 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:01.340 07:29:34 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:29:01.340 07:29:34 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:01.340 07:29:34 -- common/autotest_common.sh@10 -- # set +x 00:29:01.340 ************************************ 00:29:01.340 START TEST bdev_nbd 00:29:01.340 ************************************ 00:29:01.340 07:29:34 -- common/autotest_common.sh@1102 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:29:01.340 07:29:34 -- bdev/blockdev.sh@298 -- # uname -s 00:29:01.340 07:29:34 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:01.340 07:29:34 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:01.340 07:29:34 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:01.340 07:29:34 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:29:01.340 07:29:34 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:01.340 07:29:34 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:29:01.340 07:29:34 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:01.340 07:29:34 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:29:01.340 07:29:34 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:01.340 07:29:34 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:29:01.340 07:29:34 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:29:01.340 07:29:34 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:01.340 07:29:34 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:29:01.340 07:29:34 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:01.340 07:29:34 -- bdev/blockdev.sh@316 -- # nbd_pid=142680 00:29:01.340 07:29:34 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:01.340 07:29:34 -- bdev/blockdev.sh@318 -- # waitforlisten 142680 /var/tmp/spdk-nbd.sock 00:29:01.340 07:29:34 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:01.340 07:29:34 -- common/autotest_common.sh@817 -- # '[' -z 142680 ']' 00:29:01.340 07:29:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:01.340 07:29:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:01.340 07:29:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:01.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:01.340 07:29:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:01.340 07:29:34 -- common/autotest_common.sh@10 -- # set +x 00:29:01.340 [2024-02-13 07:29:34.876755] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:01.340 [2024-02-13 07:29:34.877109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:01.340 [2024-02-13 07:29:35.028674] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.599 [2024-02-13 07:29:35.204137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.599 [2024-02-13 07:29:35.205113] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:02.168 07:29:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:02.168 07:29:35 -- common/autotest_common.sh@850 -- # return 0 00:29:02.168 07:29:35 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@24 -- # local i 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:02.168 07:29:35 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:29:02.428 07:29:36 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:02.428 07:29:36 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:02.428 07:29:36 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:02.428 07:29:36 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:02.428 07:29:36 -- common/autotest_common.sh@855 -- # local i 00:29:02.428 07:29:36 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:02.428 07:29:36 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:02.428 07:29:36 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:02.428 07:29:36 -- common/autotest_common.sh@859 -- # break 00:29:02.428 07:29:36 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:02.428 07:29:36 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:02.428 07:29:36 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:02.428 1+0 records in 00:29:02.428 1+0 records out 00:29:02.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109738 s, 3.7 MB/s 00:29:02.428 07:29:36 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.428 07:29:36 -- common/autotest_common.sh@872 -- # size=4096 00:29:02.428 07:29:36 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:02.428 07:29:36 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:02.428 07:29:36 -- common/autotest_common.sh@875 -- # return 0 00:29:02.428 07:29:36 -- bdev/nbd_common.sh@27 -- # (( i++ 
)) 00:29:02.428 07:29:36 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:29:02.428 07:29:36 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:02.687 { 00:29:02.687 "nbd_device": "/dev/nbd0", 00:29:02.687 "bdev_name": "Nvme0n1" 00:29:02.687 } 00:29:02.687 ]' 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:02.687 { 00:29:02.687 "nbd_device": "/dev/nbd0", 00:29:02.687 "bdev_name": "Nvme0n1" 00:29:02.687 } 00:29:02.687 ]' 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@51 -- # local i 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:02.687 07:29:36 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@41 -- # break 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@45 -- # return 0 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:02.946 07:29:36 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:03.205 07:29:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:03.205 07:29:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:03.205 07:29:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:03.205 07:29:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@65 -- # true 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@65 -- # count=0 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@122 -- # count=0 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@127 -- # return 0 00:29:03.464 07:29:36 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:03.464 07:29:36 -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@12 -- # local i 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:03.464 07:29:36 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:29:03.464 /dev/nbd0 00:29:03.464 07:29:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:03.464 07:29:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:03.464 07:29:37 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:03.464 07:29:37 -- common/autotest_common.sh@855 -- # local i 00:29:03.464 07:29:37 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:03.464 07:29:37 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:03.464 07:29:37 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:03.464 07:29:37 -- common/autotest_common.sh@859 -- # break 00:29:03.464 07:29:37 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:03.464 07:29:37 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:03.464 07:29:37 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:03.465 1+0 records in 00:29:03.465 1+0 records out 00:29:03.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752031 s, 5.4 MB/s 00:29:03.465 07:29:37 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.465 07:29:37 -- common/autotest_common.sh@872 -- # size=4096 00:29:03.465 07:29:37 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:03.465 07:29:37 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:03.465 07:29:37 -- common/autotest_common.sh@875 -- # return 0 00:29:03.465 07:29:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:03.465 07:29:37 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:29:03.465 07:29:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:03.465 07:29:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:03.465 07:29:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:03.724 { 00:29:03.724 "nbd_device": "/dev/nbd0", 00:29:03.724 "bdev_name": "Nvme0n1" 00:29:03.724 } 00:29:03.724 ]' 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:03.724 { 00:29:03.724 "nbd_device": "/dev/nbd0", 00:29:03.724 "bdev_name": "Nvme0n1" 00:29:03.724 } 00:29:03.724 ]' 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@65 -- # count=1 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@66 -- # echo 1 00:29:03.724 07:29:37 -- 
bdev/nbd_common.sh@95 -- # count=1 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:03.724 256+0 records in 00:29:03.724 256+0 records out 00:29:03.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.007744 s, 135 MB/s 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:03.724 07:29:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:03.983 256+0 records in 00:29:03.983 256+0 records out 00:29:03.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0577945 s, 18.1 MB/s 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@51 -- # local i 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:03.983 07:29:37 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:04.242 07:29:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:04.242 07:29:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:04.242 07:29:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:04.242 07:29:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:04.242 07:29:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:04.242 07:29:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:04.243 07:29:37 -- bdev/nbd_common.sh@41 -- # break 00:29:04.243 07:29:37 -- bdev/nbd_common.sh@45 -- # return 0 00:29:04.243 07:29:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:04.243 07:29:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:04.243 07:29:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:29:04.502 07:29:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:04.502 07:29:37 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:04.502 07:29:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@65 -- # true 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@65 -- # count=0 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@104 -- # count=0 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@109 -- # return 0 00:29:04.502 07:29:38 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:04.502 07:29:38 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:04.761 malloc_lvol_verify 00:29:04.761 07:29:38 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:05.021 712cc832-77c4-4f05-8425-0b31c6fc7811 00:29:05.021 07:29:38 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:05.280 15f4f3c1-33d0-4dd0-8a30-dda6e8e737f1 00:29:05.280 07:29:38 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:05.280 /dev/nbd0 00:29:05.539 07:29:38 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:05.539 mke2fs 1.45.5 (07-Jan-2020) 00:29:05.539 00:29:05.539 Filesystem too small for a journal 00:29:05.539 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:05.539 00:29:05.539 Allocating group tables: 0/1 done 00:29:05.539 Writing inode tables: 0/1 done 00:29:05.539 Writing superblocks and filesystem accounting information: 0/1 done 00:29:05.539 00:29:05.539 07:29:38 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:05.539 07:29:38 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:05.539 07:29:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:05.539 07:29:38 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:05.539 07:29:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:05.539 07:29:38 -- bdev/nbd_common.sh@51 -- # local i 00:29:05.539 07:29:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:05.539 07:29:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@41 -- # break 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@45 -- # return 0 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:05.539 07:29:39 -- bdev/nbd_common.sh@147 -- # return 0 00:29:05.539 07:29:39 -- bdev/blockdev.sh@324 -- # killprocess 142680 00:29:05.539 07:29:39 -- common/autotest_common.sh@924 -- # '[' -z 142680 ']' 00:29:05.539 07:29:39 -- common/autotest_common.sh@928 -- # kill -0 142680 00:29:05.539 07:29:39 -- common/autotest_common.sh@929 -- # uname 00:29:05.539 07:29:39 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:05.539 07:29:39 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 142680 00:29:05.539 killing process with pid 142680 00:29:05.539 07:29:39 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:05.539 07:29:39 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:05.539 07:29:39 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 142680' 00:29:05.539 07:29:39 -- common/autotest_common.sh@943 -- # kill 142680 00:29:05.539 07:29:39 -- common/autotest_common.sh@948 -- # wait 142680 00:29:05.539 [2024-02-13 07:29:39.216267] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:06.918 ************************************ 00:29:06.918 END TEST bdev_nbd 00:29:06.918 ************************************ 00:29:06.918 07:29:40 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:06.918 00:29:06.918 real 0m5.442s 00:29:06.918 user 0m7.945s 00:29:06.918 sys 0m1.022s 00:29:06.918 07:29:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:06.918 07:29:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.918 07:29:40 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:06.918 07:29:40 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:29:06.918 skipping fio tests on NVMe due to multi-ns failures. 00:29:06.918 07:29:40 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:06.918 07:29:40 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:06.918 07:29:40 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:06.918 07:29:40 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:29:06.918 07:29:40 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:06.918 07:29:40 -- common/autotest_common.sh@10 -- # set +x 00:29:06.918 ************************************ 00:29:06.918 START TEST bdev_verify 00:29:06.918 ************************************ 00:29:06.918 07:29:40 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:06.918 [2024-02-13 07:29:40.396158] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:06.918 [2024-02-13 07:29:40.396636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142890 ] 00:29:06.918 [2024-02-13 07:29:40.568487] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:07.176 [2024-02-13 07:29:40.751107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.176 [2024-02-13 07:29:40.751124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.176 [2024-02-13 07:29:40.751602] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:07.742 Running I/O for 5 seconds... 00:29:13.012 00:29:13.012 Latency(us) 00:29:13.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.012 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:13.012 Verification LBA range: start 0x0 length 0xa0000 00:29:13.012 Nvme0n1 : 5.01 14477.84 56.55 0.00 0.00 8808.11 543.65 15252.01 00:29:13.012 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:13.012 Verification LBA range: start 0xa0000 length 0xa0000 00:29:13.012 Nvme0n1 : 5.01 14298.82 55.85 0.00 0.00 8915.47 603.23 17754.30 00:29:13.012 =================================================================================================================== 00:29:13.012 Total : 28776.66 112.41 0.00 0.00 8861.45 543.65 17754.30 00:29:13.012 [2024-02-13 07:29:46.185483] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:21.136 ************************************ 00:29:21.136 END TEST bdev_verify 00:29:21.136 ************************************ 00:29:21.136 00:29:21.136 real 0m13.388s 00:29:21.136 user 0m25.508s 00:29:21.136 sys 0m0.367s 00:29:21.136 07:29:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:21.136 07:29:53 -- common/autotest_common.sh@10 -- # set +x 00:29:21.136 07:29:53 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:21.136 07:29:53 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:29:21.137 07:29:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:21.137 07:29:53 -- common/autotest_common.sh@10 -- # set +x 00:29:21.137 ************************************ 00:29:21.137 START TEST bdev_verify_big_io 00:29:21.137 ************************************ 00:29:21.137 07:29:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:29:21.137 [2024-02-13 07:29:53.808290] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:21.137 [2024-02-13 07:29:53.808655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143093 ] 00:29:21.137 [2024-02-13 07:29:53.967156] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:21.137 [2024-02-13 07:29:54.141369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.137 [2024-02-13 07:29:54.141380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.137 [2024-02-13 07:29:54.141985] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:21.137 Running I/O for 5 seconds... 00:29:26.411 00:29:26.411 Latency(us) 00:29:26.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.411 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:29:26.411 Verification LBA range: start 0x0 length 0xa000 00:29:26.411 Nvme0n1 : 5.04 1976.02 123.50 0.00 0.00 63908.12 562.27 104380.97 00:29:26.411 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:29:26.411 Verification LBA range: start 0xa000 length 0xa000 00:29:26.411 Nvme0n1 : 5.04 2196.13 137.26 0.00 0.00 57532.56 532.48 95801.72 00:29:26.411 =================================================================================================================== 00:29:26.411 Total : 4172.15 260.76 0.00 0.00 60551.57 532.48 104380.97 00:29:26.411 [2024-02-13 07:29:59.643254] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:27.347 ************************************ 00:29:27.347 END TEST bdev_verify_big_io 00:29:27.347 ************************************ 00:29:27.347 00:29:27.347 real 0m7.271s 00:29:27.347 user 0m13.415s 00:29:27.347 sys 0m0.267s 00:29:27.347 07:30:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:27.347 07:30:01 -- common/autotest_common.sh@10 -- # set +x 00:29:27.606 07:30:01 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:27.606 07:30:01 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:29:27.606 07:30:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:27.606 07:30:01 -- common/autotest_common.sh@10 -- # set +x 00:29:27.606 ************************************ 00:29:27.606 START TEST bdev_write_zeroes 00:29:27.606 ************************************ 00:29:27.606 07:30:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:27.606 [2024-02-13 07:30:01.136914] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:27.606 [2024-02-13 07:30:01.137332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143218 ] 00:29:27.606 [2024-02-13 07:30:01.290214] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.865 [2024-02-13 07:30:01.463529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.865 [2024-02-13 07:30:01.463977] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:28.432 Running I/O for 1 seconds... 00:29:29.364 00:29:29.364 Latency(us) 00:29:29.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.364 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:29:29.364 Nvme0n1 : 1.00 69934.87 273.18 0.00 0.00 1825.59 554.82 11558.17 00:29:29.364 =================================================================================================================== 00:29:29.364 Total : 69934.87 273.18 0.00 0.00 1825.59 554.82 11558.17 00:29:29.364 [2024-02-13 07:30:02.887260] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:30.300 ************************************ 00:29:30.300 END TEST bdev_write_zeroes 00:29:30.300 ************************************ 00:29:30.300 00:29:30.300 real 0m2.818s 00:29:30.300 user 0m2.437s 00:29:30.300 sys 0m0.281s 00:29:30.300 07:30:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:30.300 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:30.300 07:30:03 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:30.300 07:30:03 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:29:30.300 07:30:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:30.300 07:30:03 -- common/autotest_common.sh@10 -- # set +x 00:29:30.300 ************************************ 00:29:30.300 START TEST bdev_json_nonenclosed 00:29:30.300 ************************************ 00:29:30.300 07:30:03 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:30.559 [2024-02-13 07:30:03.998133] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:30.559 [2024-02-13 07:30:03.998572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143275 ] 00:29:30.559 [2024-02-13 07:30:04.150926] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.828 [2024-02-13 07:30:04.321964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.828 [2024-02-13 07:30:04.322390] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:30.828 [2024-02-13 07:30:04.322618] json_config.c: 598:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:29:30.828 [2024-02-13 07:30:04.322748] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:30.828 [2024-02-13 07:30:04.322832] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:30.828 [2024-02-13 07:30:04.322969] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:31.089 ************************************ 00:29:31.089 END TEST bdev_json_nonenclosed 00:29:31.089 ************************************ 00:29:31.089 00:29:31.089 real 0m0.720s 00:29:31.089 user 0m0.484s 00:29:31.089 sys 0m0.134s 00:29:31.089 07:30:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:31.089 07:30:04 -- common/autotest_common.sh@10 -- # set +x 00:29:31.089 07:30:04 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:31.089 07:30:04 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:29:31.089 07:30:04 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:31.089 07:30:04 -- common/autotest_common.sh@10 -- # set +x 00:29:31.089 ************************************ 00:29:31.089 START TEST bdev_json_nonarray 00:29:31.089 ************************************ 00:29:31.089 07:30:04 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:29:31.089 [2024-02-13 07:30:04.766135] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:31.089 [2024-02-13 07:30:04.766491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143304 ] 00:29:31.347 [2024-02-13 07:30:04.918281] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.607 [2024-02-13 07:30:05.090271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.607 [2024-02-13 07:30:05.090686] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:31.607 [2024-02-13 07:30:05.090913] json_config.c: 604:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:29:31.607 [2024-02-13 07:30:05.091042] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:29:31.607 [2024-02-13 07:30:05.091117] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:31.607 [2024-02-13 07:30:05.091284] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:31.866 ************************************ 00:29:31.866 END TEST bdev_json_nonarray 00:29:31.866 ************************************ 00:29:31.866 00:29:31.866 real 0m0.701s 00:29:31.866 user 0m0.507s 00:29:31.866 sys 0m0.093s 00:29:31.866 07:30:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:31.866 07:30:05 -- common/autotest_common.sh@10 -- # set +x 00:29:31.866 07:30:05 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:29:31.866 07:30:05 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:29:31.866 07:30:05 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:29:31.866 07:30:05 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:29:31.866 07:30:05 -- bdev/blockdev.sh@809 -- # cleanup 00:29:31.866 07:30:05 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:29:31.866 07:30:05 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:31.866 07:30:05 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:29:31.866 07:30:05 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:29:31.866 07:30:05 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:29:31.866 07:30:05 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:29:31.866 ************************************ 00:29:31.866 END TEST blockdev_nvme 00:29:31.866 ************************************ 00:29:31.866 00:29:31.866 real 0m38.778s 00:29:31.866 user 1m1.195s 00:29:31.866 sys 0m3.532s 00:29:31.866 07:30:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:31.866 07:30:05 -- common/autotest_common.sh@10 -- # set +x 00:29:31.866 07:30:05 -- spdk/autotest.sh@219 -- # uname -s 00:29:31.866 07:30:05 -- spdk/autotest.sh@219 -- # [[ Linux == Linux ]] 00:29:31.866 07:30:05 -- spdk/autotest.sh@220 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:31.866 07:30:05 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:29:31.866 07:30:05 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:31.866 07:30:05 -- common/autotest_common.sh@10 -- # set +x 00:29:31.866 ************************************ 00:29:31.866 START TEST blockdev_nvme_gpt 00:29:31.866 ************************************ 00:29:31.866 07:30:05 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:29:32.138 * Looking for test storage... 
00:29:32.138 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:29:32.138 07:30:05 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:29:32.138 07:30:05 -- bdev/nbd_common.sh@6 -- # set -e 00:29:32.138 07:30:05 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:29:32.138 07:30:05 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:32.138 07:30:05 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:29:32.138 07:30:05 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:29:32.138 07:30:05 -- bdev/blockdev.sh@18 -- # : 00:29:32.138 07:30:05 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:29:32.138 07:30:05 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:29:32.138 07:30:05 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:29:32.138 07:30:05 -- bdev/blockdev.sh@672 -- # uname -s 00:29:32.138 07:30:05 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:29:32.138 07:30:05 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:29:32.138 07:30:05 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:29:32.138 07:30:05 -- bdev/blockdev.sh@681 -- # crypto_device= 00:29:32.138 07:30:05 -- bdev/blockdev.sh@682 -- # dek= 00:29:32.138 07:30:05 -- bdev/blockdev.sh@683 -- # env_ctx= 00:29:32.138 07:30:05 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:29:32.138 07:30:05 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:29:32.138 07:30:05 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:29:32.138 07:30:05 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:29:32.138 07:30:05 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:29:32.138 07:30:05 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=143380 00:29:32.138 07:30:05 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:29:32.138 07:30:05 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:29:32.138 07:30:05 -- bdev/blockdev.sh@47 -- # waitforlisten 143380 00:29:32.138 07:30:05 -- common/autotest_common.sh@817 -- # '[' -z 143380 ']' 00:29:32.138 07:30:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.138 07:30:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:32.138 07:30:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.138 07:30:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:32.138 07:30:05 -- common/autotest_common.sh@10 -- # set +x 00:29:32.138 [2024-02-13 07:30:05.685093] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:32.138 [2024-02-13 07:30:05.686273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143380 ] 00:29:32.409 [2024-02-13 07:30:05.856452] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.409 [2024-02-13 07:30:06.036573] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:32.409 [2024-02-13 07:30:06.037082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.785 07:30:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:33.785 07:30:07 -- common/autotest_common.sh@850 -- # return 0 00:29:33.785 07:30:07 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:29:33.785 07:30:07 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:29:33.785 07:30:07 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:34.045 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:29:34.045 Waiting for block devices as requested 00:29:34.045 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:29:34.045 07:30:07 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:29:34.045 07:30:07 -- common/autotest_common.sh@1652 -- # zoned_devs=() 00:29:34.045 07:30:07 -- common/autotest_common.sh@1652 -- # local -gA zoned_devs 00:29:34.045 07:30:07 -- common/autotest_common.sh@1653 -- # local nvme bdf 00:29:34.045 07:30:07 -- common/autotest_common.sh@1655 -- # for nvme in /sys/block/nvme* 00:29:34.045 07:30:07 -- common/autotest_common.sh@1656 -- # is_block_zoned nvme0n1 00:29:34.045 07:30:07 -- common/autotest_common.sh@1645 -- # local device=nvme0n1 00:29:34.045 07:30:07 -- common/autotest_common.sh@1647 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:34.045 07:30:07 -- common/autotest_common.sh@1648 -- # [[ none != none ]] 00:29:34.045 07:30:07 -- bdev/blockdev.sh@105 -- # nvme_devs=(/sys/bus/pci/drivers/nvme/*/nvme/nvme*/nvme*n*) 00:29:34.045 07:30:07 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:29:34.045 07:30:07 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:29:34.045 07:30:07 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:29:34.045 07:30:07 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:29:34.045 07:30:07 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme0n1 00:29:34.045 07:30:07 -- bdev/blockdev.sh@111 -- # parted /dev/nvme0n1 -ms print 00:29:34.045 07:30:07 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:29:34.045 BYT; 00:29:34.045 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:29:34.045 07:30:07 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:29:34.045 BYT; 00:29:34.045 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:29:34.045 07:30:07 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme0n1 00:29:34.045 07:30:07 -- bdev/blockdev.sh@114 -- # break 00:29:34.045 07:30:07 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme0n1 ]] 00:29:34.045 07:30:07 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:29:34.045 07:30:07 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:29:34.045 07:30:07 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% 
mkpart SPDK_TEST_second 50% 100% 00:29:34.983 07:30:08 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:29:34.983 07:30:08 -- scripts/common.sh@410 -- # local spdk_guid 00:29:34.983 07:30:08 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:34.983 07:30:08 -- scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:34.983 07:30:08 -- scripts/common.sh@415 -- # IFS='()' 00:29:34.983 07:30:08 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:29:34.983 07:30:08 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:34.983 07:30:08 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:29:34.983 07:30:08 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:34.983 07:30:08 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:34.983 07:30:08 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:29:34.983 07:30:08 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:29:34.983 07:30:08 -- scripts/common.sh@422 -- # local spdk_guid 00:29:34.983 07:30:08 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:29:34.983 07:30:08 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:34.983 07:30:08 -- scripts/common.sh@427 -- # IFS='()' 00:29:34.983 07:30:08 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:29:34.983 07:30:08 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:29:34.983 07:30:08 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:29:34.983 07:30:08 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:34.983 07:30:08 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:34.983 07:30:08 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:29:34.983 07:30:08 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:29:36.362 The operation has completed successfully. 00:29:36.362 07:30:09 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:29:37.300 The operation has completed successfully. 
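Condensed from the xtrace above, the GPT setup amounts to four commands. This is a sketch of what setup_gpt_conf ran, not a verbatim replay; the partition-type GUIDs are the SPDK_GPT_PART_TYPE_GUID values grepped out of module/bdev/gpt/gpt.h:

    # Label the disk and carve two half-disk partitions.
    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    # Stamp SPDK's partition-type GUIDs plus fixed unique-partition GUIDs
    # so the gpt vbdev module exposes the partitions as bdevs.
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
           -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
           -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1

Partition 1 carries the current SPDK type GUID and partition 2 the legacy one, which is why the two partitions later enumerate as Nvme0n1p1 and Nvme0n1p2 with those unique GUIDs as aliases once setup.sh rebinds the controller.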
00:29:37.300 07:30:10 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:29:37.559 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:29:37.559 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:29:38.496 07:30:12 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:29:38.496 07:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.496 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.496 [] 00:29:38.496 07:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.496 07:30:12 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:29:38.496 07:30:12 -- bdev/blockdev.sh@79 -- # local json 00:29:38.496 07:30:12 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:29:38.496 07:30:12 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:29:38.496 07:30:12 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } } ] }'\''' 00:29:38.496 07:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.496 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.496 07:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.496 07:30:12 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:29:38.496 07:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.496 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.496 07:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.496 07:30:12 -- bdev/blockdev.sh@738 -- # cat 00:29:38.496 07:30:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:29:38.496 07:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.497 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.497 07:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.497 07:30:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:29:38.497 07:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.497 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.756 07:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.756 07:30:12 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:29:38.756 07:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.756 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.756 07:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.756 07:30:12 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:29:38.756 07:30:12 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:29:38.756 07:30:12 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:29:38.756 07:30:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:38.756 07:30:12 -- common/autotest_common.sh@10 -- # set +x 00:29:38.756 07:30:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:38.756 07:30:12 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:29:38.756 07:30:12 -- bdev/blockdev.sh@747 -- # jq -r .name 00:29:38.756 07:30:12 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' 00:29:38.756 07:30:12 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:29:38.756 07:30:12 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:29:38.756 07:30:12 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:29:38.756 07:30:12 -- bdev/blockdev.sh@752 -- # killprocess 143380 00:29:38.756 07:30:12 -- common/autotest_common.sh@924 -- # '[' -z 143380 ']' 00:29:38.757 07:30:12 -- common/autotest_common.sh@928 -- # kill -0 143380 00:29:38.757 07:30:12 -- common/autotest_common.sh@929 -- # uname 00:29:38.757 07:30:12 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:38.757 07:30:12 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 143380 00:29:38.757 07:30:12 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:38.757 07:30:12 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:38.757 07:30:12 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 143380' 00:29:38.757 killing process with pid 143380 00:29:38.757 07:30:12 -- common/autotest_common.sh@943 -- # kill 143380 00:29:38.757 07:30:12 -- common/autotest_common.sh@948 -- # wait 143380 00:29:40.661 07:30:14 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:40.661 07:30:14 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:29:40.661 07:30:14 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:29:40.661 07:30:14 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:40.662 07:30:14 -- common/autotest_common.sh@10 -- # set +x 00:29:40.662 ************************************ 00:29:40.662 START TEST bdev_hello_world 00:29:40.662 ************************************ 00:29:40.662 07:30:14 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b 
Nvme0n1p1 '' 00:29:40.662 [2024-02-13 07:30:14.338003] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:40.662 [2024-02-13 07:30:14.338528] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143948 ] 00:29:40.921 [2024-02-13 07:30:14.506258] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.180 [2024-02-13 07:30:14.698407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:41.180 [2024-02-13 07:30:14.698675] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:41.439 [2024-02-13 07:30:15.109414] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:29:41.439 [2024-02-13 07:30:15.109648] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:29:41.439 [2024-02-13 07:30:15.109731] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:29:41.439 [2024-02-13 07:30:15.112422] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:29:41.439 [2024-02-13 07:30:15.113007] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:29:41.439 [2024-02-13 07:30:15.113235] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:29:41.439 [2024-02-13 07:30:15.113594] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:29:41.439 00:29:41.439 [2024-02-13 07:30:15.113746] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:29:41.439 [2024-02-13 07:30:15.113870] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:42.818 ************************************ 00:29:42.818 END TEST bdev_hello_world 00:29:42.818 ************************************ 00:29:42.818 00:29:42.818 real 0m1.838s 00:29:42.818 user 0m1.460s 00:29:42.818 sys 0m0.277s 00:29:42.818 07:30:16 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:42.818 07:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.818 07:30:16 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:29:42.818 07:30:16 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:29:42.818 07:30:16 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:42.818 07:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.818 ************************************ 00:29:42.818 START TEST bdev_bounds 00:29:42.818 ************************************ 00:29:42.818 07:30:16 -- common/autotest_common.sh@1102 -- # bdev_bounds '' 00:29:42.818 07:30:16 -- bdev/blockdev.sh@288 -- # bdevio_pid=143998 00:29:42.818 07:30:16 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:42.818 07:30:16 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:29:42.818 07:30:16 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 143998' 00:29:42.818 Process bdevio pid: 143998 00:29:42.818 07:30:16 -- bdev/blockdev.sh@291 -- # waitforlisten 143998 00:29:42.818 07:30:16 -- common/autotest_common.sh@817 -- # '[' -z 143998 ']' 00:29:42.818 07:30:16 -- 
common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:42.818 07:30:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:42.818 07:30:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:42.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:42.818 07:30:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:42.818 07:30:16 -- common/autotest_common.sh@10 -- # set +x 00:29:42.818 [2024-02-13 07:30:16.220924] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:29:42.818 [2024-02-13 07:30:16.221191] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143998 ] 00:29:42.818 [2024-02-13 07:30:16.384770] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:43.077 [2024-02-13 07:30:16.568616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.077 [2024-02-13 07:30:16.568795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:43.077 [2024-02-13 07:30:16.568800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.077 [2024-02-13 07:30:16.569033] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:43.646 07:30:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:43.646 07:30:17 -- common/autotest_common.sh@850 -- # return 0 00:29:43.646 07:30:17 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:29:43.646 I/O targets: 00:29:43.646 Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:29:43.646 Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:29:43.646 00:29:43.646 00:29:43.646 CUnit - A unit testing framework for C - Version 2.1-3 00:29:43.646 http://cunit.sourceforge.net/ 00:29:43.646 00:29:43.646 00:29:43.646 Suite: bdevio tests on: Nvme0n1p2 00:29:43.646 Test: blockdev write read block ...passed 00:29:43.646 Test: blockdev write zeroes read block ...passed 00:29:43.646 Test: blockdev write zeroes read no split ...passed 00:29:43.646 Test: blockdev write zeroes read split ...passed 00:29:43.646 Test: blockdev write zeroes read split partial ...passed 00:29:43.646 Test: blockdev reset ...[2024-02-13 07:30:17.328712] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:43.646 [2024-02-13 07:30:17.332793] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:43.646 passed 00:29:43.646 Test: blockdev write read 8 blocks ...passed 00:29:43.646 Test: blockdev write read size > 128k ...passed 00:29:43.646 Test: blockdev write read invalid size ...passed 00:29:43.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:43.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:43.646 Test: blockdev write read max offset ...passed 00:29:43.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:43.646 Test: blockdev writev readv 8 blocks ...passed 00:29:43.646 Test: blockdev writev readv 30 x 1block ...passed 00:29:43.646 Test: blockdev writev readv block ...passed 00:29:43.646 Test: blockdev writev readv size > 128k ...passed 00:29:43.905 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:43.906 Test: blockdev comparev and writev ...[2024-02-13 07:30:17.344288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0xa820b000 len:0x1000 00:29:43.906 [2024-02-13 07:30:17.344736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:43.906 passed 00:29:43.906 Test: blockdev nvme passthru rw ...passed 00:29:43.906 Test: blockdev nvme passthru vendor specific ...passed 00:29:43.906 Test: blockdev nvme admin passthru ...passed 00:29:43.906 Test: blockdev copy ...passed 00:29:43.906 Suite: bdevio tests on: Nvme0n1p1 00:29:43.906 Test: blockdev write read block ...passed 00:29:43.906 Test: blockdev write zeroes read block ...passed 00:29:43.906 Test: blockdev write zeroes read no split ...passed 00:29:43.906 Test: blockdev write zeroes read split ...passed 00:29:43.906 Test: blockdev write zeroes read split partial ...passed 00:29:43.906 Test: blockdev reset ...[2024-02-13 07:30:17.397623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:29:43.906 [2024-02-13 07:30:17.400873] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:43.906 passed 00:29:43.906 Test: blockdev write read 8 blocks ...passed 00:29:43.906 Test: blockdev write read size > 128k ...passed 00:29:43.906 Test: blockdev write read invalid size ...passed 00:29:43.906 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:43.906 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:43.906 Test: blockdev write read max offset ...passed 00:29:43.906 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:43.906 Test: blockdev writev readv 8 blocks ...passed 00:29:43.906 Test: blockdev writev readv 30 x 1block ...passed 00:29:43.906 Test: blockdev writev readv block ...passed 00:29:43.906 Test: blockdev writev readv size > 128k ...passed 00:29:43.906 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:43.906 Test: blockdev comparev and writev ...[2024-02-13 07:30:17.410674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0xa820d000 len:0x1000 00:29:43.906 [2024-02-13 07:30:17.410898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:29:43.906 passed 00:29:43.906 Test: blockdev nvme passthru rw ...passed 00:29:43.906 Test: blockdev nvme passthru vendor specific ...passed 00:29:43.906 Test: blockdev nvme admin passthru ...passed 00:29:43.906 Test: blockdev copy ...passed 00:29:43.906 00:29:43.906 Run Summary: Type Total Ran Passed Failed Inactive 00:29:43.906 suites 2 2 n/a 0 0 00:29:43.906 tests 46 46 46 0 0 00:29:43.906 asserts 284 284 284 0 n/a 00:29:43.906 00:29:43.906 Elapsed time = 0.370 seconds 00:29:43.906 0 00:29:43.906 07:30:17 -- bdev/blockdev.sh@293 -- # killprocess 143998 00:29:43.906 07:30:17 -- common/autotest_common.sh@924 -- # '[' -z 143998 ']' 00:29:43.906 07:30:17 -- common/autotest_common.sh@928 -- # kill -0 143998 00:29:43.906 07:30:17 -- common/autotest_common.sh@929 -- # uname 00:29:43.906 07:30:17 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:43.906 07:30:17 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 143998 00:29:43.906 killing process with pid 143998 00:29:43.906 07:30:17 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:43.906 07:30:17 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:43.906 07:30:17 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 143998' 00:29:43.906 07:30:17 -- common/autotest_common.sh@943 -- # kill 143998 00:29:43.906 07:30:17 -- common/autotest_common.sh@948 -- # wait 143998 00:29:43.906 [2024-02-13 07:30:17.448517] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:44.843 ************************************ 00:29:44.843 END TEST bdev_bounds 00:29:44.843 ************************************ 00:29:44.843 07:30:18 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:29:44.843 00:29:44.843 real 0m2.311s 00:29:44.843 user 0m5.482s 00:29:44.843 sys 0m0.386s 00:29:44.843 07:30:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:44.843 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:29:44.843 07:30:18 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:29:44.843 07:30:18 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 
']' 00:29:44.843 07:30:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:44.843 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:29:44.843 ************************************ 00:29:44.843 START TEST bdev_nbd 00:29:44.843 ************************************ 00:29:44.843 07:30:18 -- common/autotest_common.sh@1102 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' '' 00:29:44.843 07:30:18 -- bdev/blockdev.sh@298 -- # uname -s 00:29:45.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:29:45.103 07:30:18 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:29:45.103 07:30:18 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:45.103 07:30:18 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:29:45.103 07:30:18 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:29:45.103 07:30:18 -- bdev/blockdev.sh@302 -- # local bdev_all 00:29:45.103 07:30:18 -- bdev/blockdev.sh@303 -- # local bdev_num=2 00:29:45.103 07:30:18 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:29:45.103 07:30:18 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:29:45.103 07:30:18 -- bdev/blockdev.sh@309 -- # local nbd_all 00:29:45.103 07:30:18 -- bdev/blockdev.sh@310 -- # bdev_num=2 00:29:45.103 07:30:18 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:29:45.103 07:30:18 -- bdev/blockdev.sh@312 -- # local nbd_list 00:29:45.103 07:30:18 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:29:45.103 07:30:18 -- bdev/blockdev.sh@313 -- # local bdev_list 00:29:45.103 07:30:18 -- bdev/blockdev.sh@316 -- # nbd_pid=144061 00:29:45.103 07:30:18 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:29:45.103 07:30:18 -- bdev/blockdev.sh@318 -- # waitforlisten 144061 /var/tmp/spdk-nbd.sock 00:29:45.103 07:30:18 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:29:45.103 07:30:18 -- common/autotest_common.sh@817 -- # '[' -z 144061 ']' 00:29:45.103 07:30:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:29:45.103 07:30:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:45.103 07:30:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:29:45.103 07:30:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:45.103 07:30:18 -- common/autotest_common.sh@10 -- # set +x 00:29:45.103 [2024-02-13 07:30:18.616060] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:45.103 [2024-02-13 07:30:18.616448] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.103 [2024-02-13 07:30:18.786263] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.362 [2024-02-13 07:30:18.978291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.362 [2024-02-13 07:30:18.979084] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:45.930 07:30:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:45.930 07:30:19 -- common/autotest_common.sh@850 -- # return 0 00:29:45.930 07:30:19 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@24 -- # local i 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:45.930 07:30:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:29:46.189 07:30:19 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:29:46.189 07:30:19 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:29:46.189 07:30:19 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:29:46.189 07:30:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:46.189 07:30:19 -- common/autotest_common.sh@855 -- # local i 00:29:46.189 07:30:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:46.189 07:30:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:46.189 07:30:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:46.190 07:30:19 -- common/autotest_common.sh@859 -- # break 00:29:46.190 07:30:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:46.190 07:30:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:46.190 07:30:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:46.190 1+0 records in 00:29:46.190 1+0 records out 00:29:46.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000712033 s, 5.8 MB/s 00:29:46.190 07:30:19 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:46.190 07:30:19 -- common/autotest_common.sh@872 -- # size=4096 00:29:46.190 07:30:19 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:46.190 07:30:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:46.190 07:30:19 -- common/autotest_common.sh@875 -- # return 0 00:29:46.190 07:30:19 -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:46.190 07:30:19 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:46.190 07:30:19 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:29:46.448 07:30:20 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:29:46.448 07:30:20 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:29:46.448 07:30:20 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:29:46.448 07:30:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:29:46.448 07:30:20 -- common/autotest_common.sh@855 -- # local i 00:29:46.448 07:30:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:46.448 07:30:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:46.448 07:30:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:29:46.448 07:30:20 -- common/autotest_common.sh@859 -- # break 00:29:46.448 07:30:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:46.448 07:30:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:46.448 07:30:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:46.448 1+0 records in 00:29:46.448 1+0 records out 00:29:46.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053348 s, 7.7 MB/s 00:29:46.448 07:30:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:46.448 07:30:20 -- common/autotest_common.sh@872 -- # size=4096 00:29:46.448 07:30:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:46.448 07:30:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:46.448 07:30:20 -- common/autotest_common.sh@875 -- # return 0 00:29:46.448 07:30:20 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:29:46.448 07:30:20 -- bdev/nbd_common.sh@27 -- # (( i < 2 )) 00:29:46.448 07:30:20 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:29:46.706 { 00:29:46.706 "nbd_device": "/dev/nbd0", 00:29:46.706 "bdev_name": "Nvme0n1p1" 00:29:46.706 }, 00:29:46.706 { 00:29:46.706 "nbd_device": "/dev/nbd1", 00:29:46.706 "bdev_name": "Nvme0n1p2" 00:29:46.706 } 00:29:46.706 ]' 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@119 -- # echo '[ 00:29:46.706 { 00:29:46.706 "nbd_device": "/dev/nbd0", 00:29:46.706 "bdev_name": "Nvme0n1p1" 00:29:46.706 }, 00:29:46.706 { 00:29:46.706 "nbd_device": "/dev/nbd1", 00:29:46.706 "bdev_name": "Nvme0n1p2" 00:29:46.706 } 00:29:46.706 ]' 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@51 -- # local i 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:46.706 07:30:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:46.982 07:30:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 
00:29:46.982 07:30:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:46.982 07:30:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:46.982 07:30:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:46.982 07:30:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:46.982 07:30:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:46.982 07:30:20 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:46.982 07:30:20 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:46.982 07:30:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:46.982 07:30:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@41 -- # break 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@45 -- # return 0 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@41 -- # break 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@45 -- # return 0 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.253 07:30:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@65 -- # true 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@65 -- # count=0 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@122 -- # count=0 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@127 -- # return 0 00:29:47.821 07:30:21 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1' 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 
00:29:47.821 07:30:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@12 -- # local i 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:47.821 07:30:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:29:48.080 /dev/nbd0 00:29:48.080 07:30:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:29:48.080 07:30:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:29:48.080 07:30:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:29:48.080 07:30:21 -- common/autotest_common.sh@855 -- # local i 00:29:48.080 07:30:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:48.080 07:30:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:48.080 07:30:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:29:48.080 07:30:21 -- common/autotest_common.sh@859 -- # break 00:29:48.080 07:30:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:48.080 07:30:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:48.080 07:30:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.080 1+0 records in 00:29:48.080 1+0 records out 00:29:48.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467104 s, 8.8 MB/s 00:29:48.080 07:30:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.080 07:30:21 -- common/autotest_common.sh@872 -- # size=4096 00:29:48.080 07:30:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.080 07:30:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:48.080 07:30:21 -- common/autotest_common.sh@875 -- # return 0 00:29:48.080 07:30:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:48.080 07:30:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:48.080 07:30:21 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:29:48.339 /dev/nbd1 00:29:48.339 07:30:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:29:48.339 07:30:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:29:48.339 07:30:21 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:29:48.339 07:30:21 -- common/autotest_common.sh@855 -- # local i 00:29:48.339 07:30:21 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:29:48.339 07:30:21 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:29:48.339 07:30:21 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:29:48.339 07:30:21 -- common/autotest_common.sh@859 -- # break 00:29:48.339 07:30:21 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:29:48.339 07:30:21 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:29:48.339 07:30:21 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:29:48.339 1+0 records in 00:29:48.339 1+0 records out 00:29:48.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000668001 s, 6.1 MB/s 00:29:48.339 07:30:21 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.339 07:30:21 -- common/autotest_common.sh@872 -- # size=4096 00:29:48.339 07:30:21 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:29:48.339 
07:30:21 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:29:48.339 07:30:21 -- common/autotest_common.sh@875 -- # return 0 00:29:48.339 07:30:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:29:48.339 07:30:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:29:48.339 07:30:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:48.339 07:30:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:48.339 07:30:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:48.597 07:30:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:29:48.597 { 00:29:48.597 "nbd_device": "/dev/nbd0", 00:29:48.597 "bdev_name": "Nvme0n1p1" 00:29:48.597 }, 00:29:48.597 { 00:29:48.597 "nbd_device": "/dev/nbd1", 00:29:48.597 "bdev_name": "Nvme0n1p2" 00:29:48.597 } 00:29:48.597 ]' 00:29:48.597 07:30:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:29:48.597 { 00:29:48.597 "nbd_device": "/dev/nbd0", 00:29:48.598 "bdev_name": "Nvme0n1p1" 00:29:48.598 }, 00:29:48.598 { 00:29:48.598 "nbd_device": "/dev/nbd1", 00:29:48.598 "bdev_name": "Nvme0n1p2" 00:29:48.598 } 00:29:48.598 ]' 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:29:48.598 /dev/nbd1' 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:29:48.598 /dev/nbd1' 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@65 -- # count=2 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@66 -- # echo 2 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@95 -- # count=2 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:29:48.598 256+0 records in 00:29:48.598 256+0 records out 00:29:48.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00693968 s, 151 MB/s 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:29:48.598 256+0 records in 00:29:48.598 256+0 records out 00:29:48.598 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0709846 s, 14.8 MB/s 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:29:48.598 07:30:22 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:29:48.857 256+0 records in 00:29:48.857 256+0 records out 00:29:48.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0890278 s, 11.8 MB/s 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:29:48.857 07:30:22 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@51 -- # local i 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:48.857 07:30:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@41 -- # break 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@45 -- # return 0 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:49.116 07:30:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@41 -- # break 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@45 -- # return 0 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:49.375 07:30:22 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@65 -- # echo '' 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@65 -- # true 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@65 -- # count=0 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@66 -- # echo 0 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@104 -- # count=0 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@109 -- # return 0 00:29:49.634 07:30:23 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:29:49.634 07:30:23 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:29:49.893 malloc_lvol_verify 00:29:49.893 07:30:23 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:29:50.152 28a530d6-164b-462f-a02a-96cad04d74de 00:29:50.152 07:30:23 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:29:50.411 a73cf2bf-dddf-4406-bb29-595f29a8476c 00:29:50.411 07:30:23 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:29:50.671 /dev/nbd0 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:29:50.671 mke2fs 1.45.5 (07-Jan-2020) 00:29:50.671 00:29:50.671 Filesystem too small for a journal 00:29:50.671 Creating filesystem with 1024 4k blocks and 1024 inodes 00:29:50.671 00:29:50.671 Allocating group tables: 0/1 done 00:29:50.671 Writing inode tables: 0/1 done 00:29:50.671 Writing superblocks and filesystem accounting information: 0/1 done 00:29:50.671 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@51 -- # local i 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:50.671 07:30:24 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:29:50.930 07:30:24 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:29:50.930 07:30:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:29:50.930 07:30:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:29:50.930 07:30:24 -- bdev/nbd_common.sh@41 -- # break 00:29:50.930 07:30:24 -- bdev/nbd_common.sh@45 -- # return 0 00:29:50.930 07:30:24 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:29:50.930 07:30:24 -- bdev/nbd_common.sh@147 -- # return 0 00:29:50.930 07:30:24 -- bdev/blockdev.sh@324 -- # killprocess 144061 00:29:50.930 07:30:24 -- common/autotest_common.sh@924 -- # '[' -z 144061 ']' 00:29:50.930 07:30:24 -- common/autotest_common.sh@928 -- # kill -0 144061 00:29:50.930 07:30:24 -- common/autotest_common.sh@929 -- # uname 00:29:50.930 07:30:24 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:29:50.930 07:30:24 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 144061 00:29:50.930 07:30:24 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:29:50.930 07:30:24 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:29:50.930 07:30:24 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 144061' 00:29:50.930 killing process with pid 144061 00:29:50.930 07:30:24 -- common/autotest_common.sh@943 -- # kill 144061 00:29:50.930 [2024-02-13 07:30:24.477529] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:29:50.930 07:30:24 -- common/autotest_common.sh@948 -- # wait 144061 00:29:51.869 07:30:25 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:29:51.869 00:29:51.869 real 0m6.989s 00:29:51.869 user 0m9.845s 00:29:51.869 sys 0m1.687s 00:29:51.869 07:30:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:29:51.869 07:30:25 -- common/autotest_common.sh@10 -- # set +x 00:29:51.869 ************************************ 00:29:51.869 END TEST bdev_nbd 00:29:51.869 ************************************ 00:29:52.128 07:30:25 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:29:52.128 07:30:25 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:29:52.128 07:30:25 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:29:52.128 07:30:25 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:29:52.128 skipping fio tests on NVMe due to multi-ns failures. 00:29:52.128 07:30:25 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:52.128 07:30:25 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:52.128 07:30:25 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:29:52.128 07:30:25 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:29:52.128 07:30:25 -- common/autotest_common.sh@10 -- # set +x 00:29:52.128 ************************************ 00:29:52.128 START TEST bdev_verify 00:29:52.128 ************************************ 00:29:52.128 07:30:25 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:29:52.128 [2024-02-13 07:30:25.646862] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:29:52.128 [2024-02-13 07:30:25.647318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144343 ] 00:29:52.128 [2024-02-13 07:30:25.817817] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:52.388 [2024-02-13 07:30:25.992830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.388 [2024-02-13 07:30:25.992840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.388 [2024-02-13 07:30:25.993368] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:29:52.954 Running I/O for 5 seconds... 00:29:58.228 00:29:58.228 Latency(us) 00:29:58.228 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:58.228 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:58.228 Verification LBA range: start 0x0 length 0x4ff80 00:29:58.228 Nvme0n1p1 : 5.01 6881.41 26.88 0.00 0.00 18553.90 2204.39 21448.15 00:29:58.228 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:58.228 Verification LBA range: start 0x4ff80 length 0x4ff80 00:29:58.228 Nvme0n1p1 : 5.01 6640.38 25.94 0.00 0.00 19224.97 1392.64 28716.68 00:29:58.228 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:58.228 Verification LBA range: start 0x0 length 0x4ff7f 00:29:58.228 Nvme0n1p2 : 5.02 6886.22 26.90 0.00 0.00 18529.09 411.46 20614.05 00:29:58.228 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:29:58.228 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:29:58.228 Nvme0n1p2 : 5.02 6645.13 25.96 0.00 0.00 19182.30 1139.43 26571.87 00:29:58.228 =================================================================================================================== 00:29:58.228 Total : 27053.14 105.68 0.00 0.00 18866.71 411.46 28716.68 00:29:58.228 [2024-02-13 07:30:31.448890] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:30:02.413 ************************************ 00:30:02.413 END TEST bdev_verify 00:30:02.413 ************************************ 00:30:02.413 00:30:02.413 real 0m10.432s 00:30:02.413 user 0m19.605s 00:30:02.413 sys 0m0.365s 00:30:02.413 07:30:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:02.413 07:30:36 -- common/autotest_common.sh@10 -- # set +x 00:30:02.413 07:30:36 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:30:02.413 07:30:36 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:30:02.413 07:30:36 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:02.413 07:30:36 -- common/autotest_common.sh@10 -- # set +x 00:30:02.413 ************************************ 00:30:02.413 START TEST bdev_verify_big_io 00:30:02.413 ************************************ 00:30:02.413 07:30:36 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 
'' 00:30:02.672 [2024-02-13 07:30:36.142324] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:30:02.672 [2024-02-13 07:30:36.142775] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144480 ] 00:30:02.672 [2024-02-13 07:30:36.311072] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:02.930 [2024-02-13 07:30:36.503781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:02.930 [2024-02-13 07:30:36.503792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.930 [2024-02-13 07:30:36.504410] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:30:03.497 Running I/O for 5 seconds... 00:30:08.764 00:30:08.764 Latency(us) 00:30:08.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:08.764 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:08.764 Verification LBA range: start 0x0 length 0x4ff8 00:30:08.764 Nvme0n1p1 : 5.09 1006.33 62.90 0.00 0.00 125955.99 3291.69 197322.94 00:30:08.764 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:08.764 Verification LBA range: start 0x4ff8 length 0x4ff8 00:30:08.764 Nvme0n1p1 : 5.09 998.81 62.43 0.00 0.00 126713.93 3336.38 191603.43 00:30:08.764 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:30:08.764 Verification LBA range: start 0x0 length 0x4ff7 00:30:08.764 Nvme0n1p2 : 5.09 1005.80 62.86 0.00 0.00 124508.52 4021.53 143940.89 00:30:08.764 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:30:08.764 Verification LBA range: start 0x4ff7 length 0x4ff7 00:30:08.764 Nvme0n1p2 : 5.09 1006.43 62.90 0.00 0.00 124478.28 875.05 144894.14 00:30:08.764 =================================================================================================================== 00:30:08.764 Total : 4017.36 251.09 0.00 0.00 125411.45 875.05 197322.94 00:30:08.764 [2024-02-13 07:30:42.092323] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:30:09.699 ************************************ 00:30:09.699 END TEST bdev_verify_big_io 00:30:09.699 ************************************ 00:30:09.699 00:30:09.699 real 0m7.296s 00:30:09.699 user 0m13.405s 00:30:09.699 sys 0m0.297s 00:30:09.699 07:30:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:09.699 07:30:43 -- common/autotest_common.sh@10 -- # set +x 00:30:09.958 07:30:43 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:09.958 07:30:43 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:30:09.958 07:30:43 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:09.958 07:30:43 -- common/autotest_common.sh@10 -- # set +x 00:30:09.958 ************************************ 00:30:09.958 START TEST bdev_write_zeroes 00:30:09.958 ************************************ 00:30:09.958 07:30:43 -- common/autotest_common.sh@1102 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:09.958 [2024-02-13 07:30:43.494590] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:30:09.958 [2024-02-13 07:30:43.495001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144604 ] 00:30:10.216 [2024-02-13 07:30:43.659587] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:10.217 [2024-02-13 07:30:43.838831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:10.217 [2024-02-13 07:30:43.839261] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:30:10.783 Running I/O for 1 seconds... 00:30:11.716 00:30:11.716 Latency(us) 00:30:11.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.716 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:11.716 Nvme0n1p1 : 1.01 28217.17 110.22 0.00 0.00 4526.33 2398.02 13464.67 00:30:11.716 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:30:11.716 Nvme0n1p2 : 1.01 28206.36 110.18 0.00 0.00 4521.49 2442.71 13702.98 00:30:11.716 =================================================================================================================== 00:30:11.716 Total : 56423.53 220.40 0.00 0.00 4523.91 2398.02 13702.98 00:30:11.716 [2024-02-13 07:30:45.278744] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:30:12.652 ************************************ 00:30:12.652 END TEST bdev_write_zeroes 00:30:12.652 ************************************ 00:30:12.652 00:30:12.652 real 0m2.866s 00:30:12.652 user 0m2.509s 00:30:12.652 sys 0m0.256s 00:30:12.652 07:30:46 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:12.652 07:30:46 -- common/autotest_common.sh@10 -- # set +x 00:30:12.652 07:30:46 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:12.652 07:30:46 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:30:12.652 07:30:46 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:12.652 07:30:46 -- common/autotest_common.sh@10 -- # set +x 00:30:12.911 ************************************ 00:30:12.911 START TEST bdev_json_nonenclosed 00:30:12.911 ************************************ 00:30:12.911 07:30:46 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:12.911 [2024-02-13 07:30:46.419897] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:30:12.911 [2024-02-13 07:30:46.420314] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144661 ] 00:30:12.911 [2024-02-13 07:30:46.587911] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.169 [2024-02-13 07:30:46.803028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.169 [2024-02-13 07:30:46.803400] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:30:13.169 [2024-02-13 07:30:46.803638] json_config.c: 598:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:30:13.169 [2024-02-13 07:30:46.803770] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:13.169 [2024-02-13 07:30:46.803854] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:13.169 [2024-02-13 07:30:46.804052] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:30:13.737 ************************************ 00:30:13.737 END TEST bdev_json_nonenclosed 00:30:13.737 ************************************ 00:30:13.737 00:30:13.737 real 0m0.816s 00:30:13.737 user 0m0.559s 00:30:13.737 sys 0m0.155s 00:30:13.737 07:30:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:13.737 07:30:47 -- common/autotest_common.sh@10 -- # set +x 00:30:13.737 07:30:47 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:13.737 07:30:47 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:30:13.737 07:30:47 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:13.737 07:30:47 -- common/autotest_common.sh@10 -- # set +x 00:30:13.737 ************************************ 00:30:13.737 START TEST bdev_json_nonarray 00:30:13.737 ************************************ 00:30:13.737 07:30:47 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:30:13.737 [2024-02-13 07:30:47.287553] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:30:13.737 [2024-02-13 07:30:47.288000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144699 ] 00:30:13.996 [2024-02-13 07:30:47.454719] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.996 [2024-02-13 07:30:47.636808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.996 [2024-02-13 07:30:47.637244] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:30:13.996 [2024-02-13 07:30:47.637484] json_config.c: 604:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
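(Aside: both rejections above are intentional. bdev_json_nonenclosed and bdev_json_nonarray feed bdevperf deliberately malformed --json configs and expect json_config_prepare_ctx to fail. The contents of the two files are not shown in this log; hypothetical minimal inputs that would trip each check:

    # nonenclosed.json (hypothetical): top level not wrapped in {}
    cat > nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    # -> ERROR: Invalid JSON configuration: not enclosed in {}.

    # nonarray.json (hypothetical): "subsystems" present but not an array
    cat > nonarray.json <<'EOF'
    { "subsystems": {} }
    EOF
    # -> ERROR: Invalid JSON configuration: 'subsystems' should be an array.
)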
00:30:13.996 [2024-02-13 07:30:47.637662] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:30:13.996 [2024-02-13 07:30:47.637742] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:13.996 [2024-02-13 07:30:47.637958] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:30:14.564 ************************************ 00:30:14.564 END TEST bdev_json_nonarray 00:30:14.564 ************************************ 00:30:14.564 00:30:14.564 real 0m0.763s 00:30:14.564 user 0m0.502s 00:30:14.564 sys 0m0.160s 00:30:14.564 07:30:47 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:14.564 07:30:47 -- common/autotest_common.sh@10 -- # set +x 00:30:14.564 07:30:48 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:30:14.564 07:30:48 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:30:14.564 07:30:48 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:30:14.564 07:30:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:14.564 07:30:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:14.564 07:30:48 -- common/autotest_common.sh@10 -- # set +x 00:30:14.564 ************************************ 00:30:14.564 START TEST bdev_gpt_uuid 00:30:14.564 ************************************ 00:30:14.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:14.564 07:30:48 -- common/autotest_common.sh@1102 -- # bdev_gpt_uuid 00:30:14.564 07:30:48 -- bdev/blockdev.sh@612 -- # local bdev 00:30:14.564 07:30:48 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:30:14.564 07:30:48 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=144738 00:30:14.564 07:30:48 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:14.564 07:30:48 -- bdev/blockdev.sh@47 -- # waitforlisten 144738 00:30:14.564 07:30:48 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:30:14.564 07:30:48 -- common/autotest_common.sh@817 -- # '[' -z 144738 ']' 00:30:14.564 07:30:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.564 07:30:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:14.564 07:30:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.564 07:30:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:14.564 07:30:48 -- common/autotest_common.sh@10 -- # set +x 00:30:14.564 [2024-02-13 07:30:48.123945] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:30:14.564 [2024-02-13 07:30:48.124457] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144738 ] 00:30:14.823 [2024-02-13 07:30:48.294218] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:14.823 [2024-02-13 07:30:48.471590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:14.823 [2024-02-13 07:30:48.472111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.200 07:30:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:16.200 07:30:49 -- common/autotest_common.sh@850 -- # return 0 00:30:16.200 07:30:49 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:16.200 07:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.200 07:30:49 -- common/autotest_common.sh@10 -- # set +x 00:30:16.200 Some configs were skipped because the RPC state that can call them passed over. 00:30:16.200 07:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.200 07:30:49 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:30:16.200 07:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.200 07:30:49 -- common/autotest_common.sh@10 -- # set +x 00:30:16.200 07:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.200 07:30:49 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:30:16.200 07:30:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.200 07:30:49 -- common/autotest_common.sh@10 -- # set +x 00:30:16.459 07:30:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.459 07:30:49 -- bdev/blockdev.sh@619 -- # bdev='[ 00:30:16.459 { 00:30:16.459 "name": "Nvme0n1p1", 00:30:16.459 "aliases": [ 00:30:16.459 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:30:16.459 ], 00:30:16.459 "product_name": "GPT Disk", 00:30:16.459 "block_size": 4096, 00:30:16.459 "num_blocks": 655104, 00:30:16.460 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:16.460 "assigned_rate_limits": { 00:30:16.460 "rw_ios_per_sec": 0, 00:30:16.460 "rw_mbytes_per_sec": 0, 00:30:16.460 "r_mbytes_per_sec": 0, 00:30:16.460 "w_mbytes_per_sec": 0 00:30:16.460 }, 00:30:16.460 "claimed": false, 00:30:16.460 "zoned": false, 00:30:16.460 "supported_io_types": { 00:30:16.460 "read": true, 00:30:16.460 "write": true, 00:30:16.460 "unmap": true, 00:30:16.460 "write_zeroes": true, 00:30:16.460 "flush": true, 00:30:16.460 "reset": true, 00:30:16.460 "compare": true, 00:30:16.460 "compare_and_write": false, 00:30:16.460 "abort": true, 00:30:16.460 "nvme_admin": false, 00:30:16.460 "nvme_io": false 00:30:16.460 }, 00:30:16.460 "driver_specific": { 00:30:16.460 "gpt": { 00:30:16.460 "base_bdev": "Nvme0n1", 00:30:16.460 "offset_blocks": 256, 00:30:16.460 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:30:16.460 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:30:16.460 "partition_name": "SPDK_TEST_first" 00:30:16.460 } 00:30:16.460 } 00:30:16.460 } 00:30:16.460 ]' 00:30:16.460 07:30:49 -- bdev/blockdev.sh@620 -- # jq -r length 00:30:16.460 07:30:49 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:30:16.460 07:30:49 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:30:16.460 07:30:50 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == 
\6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:16.460 07:30:50 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:16.460 07:30:50 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:30:16.460 07:30:50 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:30:16.460 07:30:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:16.460 07:30:50 -- common/autotest_common.sh@10 -- # set +x 00:30:16.460 07:30:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:16.460 07:30:50 -- bdev/blockdev.sh@624 -- # bdev='[ 00:30:16.460 { 00:30:16.460 "name": "Nvme0n1p2", 00:30:16.460 "aliases": [ 00:30:16.460 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:30:16.460 ], 00:30:16.460 "product_name": "GPT Disk", 00:30:16.460 "block_size": 4096, 00:30:16.460 "num_blocks": 655103, 00:30:16.460 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:16.460 "assigned_rate_limits": { 00:30:16.460 "rw_ios_per_sec": 0, 00:30:16.460 "rw_mbytes_per_sec": 0, 00:30:16.460 "r_mbytes_per_sec": 0, 00:30:16.460 "w_mbytes_per_sec": 0 00:30:16.460 }, 00:30:16.460 "claimed": false, 00:30:16.460 "zoned": false, 00:30:16.460 "supported_io_types": { 00:30:16.460 "read": true, 00:30:16.460 "write": true, 00:30:16.460 "unmap": true, 00:30:16.460 "write_zeroes": true, 00:30:16.460 "flush": true, 00:30:16.460 "reset": true, 00:30:16.460 "compare": true, 00:30:16.460 "compare_and_write": false, 00:30:16.460 "abort": true, 00:30:16.460 "nvme_admin": false, 00:30:16.460 "nvme_io": false 00:30:16.460 }, 00:30:16.460 "driver_specific": { 00:30:16.460 "gpt": { 00:30:16.460 "base_bdev": "Nvme0n1", 00:30:16.460 "offset_blocks": 655360, 00:30:16.460 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:30:16.460 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:30:16.460 "partition_name": "SPDK_TEST_second" 00:30:16.460 } 00:30:16.460 } 00:30:16.460 } 00:30:16.460 ]' 00:30:16.460 07:30:50 -- bdev/blockdev.sh@625 -- # jq -r length 00:30:16.460 07:30:50 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:30:16.460 07:30:50 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:30:16.725 07:30:50 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:16.725 07:30:50 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:30:16.725 07:30:50 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:30:16.725 07:30:50 -- bdev/blockdev.sh@629 -- # killprocess 144738 00:30:16.725 07:30:50 -- common/autotest_common.sh@924 -- # '[' -z 144738 ']' 00:30:16.725 07:30:50 -- common/autotest_common.sh@928 -- # kill -0 144738 00:30:16.725 07:30:50 -- common/autotest_common.sh@929 -- # uname 00:30:16.725 07:30:50 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:30:16.725 07:30:50 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 144738 00:30:16.725 07:30:50 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:30:16.725 07:30:50 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:30:16.725 killing process with pid 144738 00:30:16.725 07:30:50 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 144738' 00:30:16.725 07:30:50 -- 
common/autotest_common.sh@943 -- # kill 144738 00:30:16.725 07:30:50 -- common/autotest_common.sh@948 -- # wait 144738 00:30:18.636 ************************************ 00:30:18.636 END TEST bdev_gpt_uuid 00:30:18.636 ************************************ 00:30:18.636 00:30:18.636 real 0m4.212s 00:30:18.636 user 0m4.621s 00:30:18.636 sys 0m0.553s 00:30:18.636 07:30:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:18.636 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:30:18.636 07:30:52 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:30:18.636 07:30:52 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:30:18.636 07:30:52 -- bdev/blockdev.sh@809 -- # cleanup 00:30:18.636 07:30:52 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:30:18.636 07:30:52 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:30:18.636 07:30:52 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 00:30:18.636 07:30:52 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:30:18.636 07:30:52 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:30:18.636 07:30:52 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:18.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:30:18.897 Waiting for block devices as requested 00:30:19.156 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:30:19.156 07:30:52 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme0n1 ]] 00:30:19.156 07:30:52 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme0n1 00:30:19.156 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:30:19.156 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:30:19.156 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:30:19.156 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:30:19.156 07:30:52 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:30:19.156 00:30:19.156 real 0m47.250s 00:30:19.156 user 1m7.784s 00:30:19.156 sys 0m6.497s 00:30:19.156 07:30:52 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:19.156 ************************************ 00:30:19.156 END TEST blockdev_nvme_gpt 00:30:19.156 ************************************ 00:30:19.156 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:30:19.156 07:30:52 -- spdk/autotest.sh@222 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:19.156 07:30:52 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:19.156 07:30:52 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:19.156 07:30:52 -- common/autotest_common.sh@10 -- # set +x 00:30:19.156 ************************************ 00:30:19.156 START TEST nvme 00:30:19.156 ************************************ 00:30:19.156 07:30:52 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:30:19.415 * Looking for test storage... 
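(Aside on the wipefs output a few entries up, before the NVMe suite starts: the cleanup erases the GPT labels the test wrote so later tests see a blank namespace. Decoding the three erases, with offsets taken from the log and 4096-byte sectors:

    #   0x00001000   45 46 49 20 50 41 52 54  ("EFI PART") - primary GPT header at LBA 1
    #   0x13ffff000  45 46 49 20 50 41 52 54  ("EFI PART") - backup GPT header at the last LBA
    #   0x000001fe   55 aa                                 - protective MBR boot signature
    # Roughly equivalent by hand (a sketch, not the test's exact code):
    wipefs --all /dev/nvme0n1   # erase all filesystem/partition-table signatures;
                                # wipefs then asks the kernel to re-read the table,
                                # hence the "calling ioctl ... Success" line above
)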
00:30:19.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:19.415 07:30:52 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:30:19.679 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:30:19.679 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:30:21.056 07:30:54 -- nvme/nvme.sh@79 -- # uname 00:30:21.056 07:30:54 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:30:21.056 07:30:54 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:30:21.056 07:30:54 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:30:21.056 07:30:54 -- common/autotest_common.sh@1056 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:30:21.056 07:30:54 -- common/autotest_common.sh@1042 -- # _randomize_va_space=2 00:30:21.056 07:30:54 -- common/autotest_common.sh@1043 -- # echo 0 00:30:21.056 07:30:54 -- common/autotest_common.sh@1045 -- # stubpid=145212 00:30:21.056 Waiting for stub to ready for secondary processes... 00:30:21.056 07:30:54 -- common/autotest_common.sh@1046 -- # echo Waiting for stub to ready for secondary processes... 00:30:21.056 07:30:54 -- common/autotest_common.sh@1047 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:21.056 07:30:54 -- common/autotest_common.sh@1049 -- # [[ -e /proc/145212 ]] 00:30:21.056 07:30:54 -- common/autotest_common.sh@1050 -- # sleep 1s 00:30:21.056 07:30:54 -- common/autotest_common.sh@1044 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:30:21.056 [2024-02-13 07:30:54.740547] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:30:21.056 [2024-02-13 07:30:54.740831] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:22.434 07:30:55 -- common/autotest_common.sh@1047 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:22.434 07:30:55 -- common/autotest_common.sh@1049 -- # [[ -e /proc/145212 ]] 00:30:22.434 07:30:55 -- common/autotest_common.sh@1050 -- # sleep 1s 00:30:22.693 [2024-02-13 07:30:56.253666] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:22.951 [2024-02-13 07:30:56.456606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:22.951 [2024-02-13 07:30:56.456755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.951 [2024-02-13 07:30:56.456753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.951 [2024-02-13 07:30:56.471734] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:30:22.951 [2024-02-13 07:30:56.481773] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:30:22.951 [2024-02-13 07:30:56.482540] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:30:23.210 07:30:56 -- common/autotest_common.sh@1047 -- # '[' -e /var/run/spdk_stub0 ']' 00:30:23.210 done. 00:30:23.210 07:30:56 -- common/autotest_common.sh@1052 -- # echo done. 
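(Aside: the stub holds the DPDK primary-process state, hugepages and the claimed NVMe device, so the short-lived test binaries can attach as secondary processes and start quickly. The wait loop traced above, condensed as a sketch of the pattern in autotest_common.sh reconstructed from the xtrace; upstream may differ in detail:

    /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    # Not ready until the stub creates /var/run/spdk_stub0; stop waiting
    # early if the stub process dies first.
    while [ ! -e /var/run/spdk_stub0 ] && [[ -e /proc/$stubpid ]]; do
        sleep 1s
    done
    echo done.
)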
00:30:23.210 07:30:56 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:23.210 07:30:56 -- common/autotest_common.sh@1075 -- # '[' 10 -le 1 ']' 00:30:23.210 07:30:56 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:23.210 07:30:56 -- common/autotest_common.sh@10 -- # set +x 00:30:23.210 ************************************ 00:30:23.210 START TEST nvme_reset 00:30:23.210 ************************************ 00:30:23.210 07:30:56 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:30:23.469 Initializing NVMe Controllers 00:30:23.469 Skipping QEMU NVMe SSD at 0000:00:06.0 00:30:23.469 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:30:23.469 00:30:23.469 real 0m0.289s 00:30:23.469 user 0m0.080s 00:30:23.469 sys 0m0.119s 00:30:23.469 07:30:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:23.469 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.469 ************************************ 00:30:23.469 END TEST nvme_reset 00:30:23.469 ************************************ 00:30:23.469 07:30:57 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:30:23.469 07:30:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:23.469 07:30:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:23.469 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.469 ************************************ 00:30:23.469 START TEST nvme_identify 00:30:23.469 ************************************ 00:30:23.469 07:30:57 -- common/autotest_common.sh@1102 -- # nvme_identify 00:30:23.469 07:30:57 -- nvme/nvme.sh@12 -- # bdfs=() 00:30:23.469 07:30:57 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:30:23.469 07:30:57 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:30:23.469 07:30:57 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:30:23.469 07:30:57 -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:23.470 07:30:57 -- common/autotest_common.sh@1496 -- # local bdfs 00:30:23.470 07:30:57 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:23.470 07:30:57 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:23.470 07:30:57 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:23.470 07:30:57 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:30:23.470 07:30:57 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:30:23.470 07:30:57 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:30:23.729 [2024-02-13 07:30:57.327409] nvme_ctrlr.c:3471:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 145245 terminated unexpected 00:30:23.729 ===================================================== 00:30:23.729 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:23.729 ===================================================== 00:30:23.729 Controller Capabilities/Features 00:30:23.729 ================================ 00:30:23.729 Vendor ID: 1b36 00:30:23.729 Subsystem Vendor ID: 1af4 00:30:23.729 Serial Number: 12340 00:30:23.729 Model Number: QEMU NVMe Ctrl 00:30:23.729 Firmware Version: 8.0.0 00:30:23.729 Recommended Arb Burst: 6 00:30:23.729 IEEE OUI Identifier: 00 54 52 00:30:23.729 Multi-path I/O 00:30:23.729 May have multiple subsystem ports: No 00:30:23.729 May have multiple controllers: No 00:30:23.729 
Associated with SR-IOV VF: No 00:30:23.729 Max Data Transfer Size: 524288 00:30:23.729 Max Number of Namespaces: 256 00:30:23.729 Max Number of I/O Queues: 64 00:30:23.729 NVMe Specification Version (VS): 1.4 00:30:23.729 NVMe Specification Version (Identify): 1.4 00:30:23.729 Maximum Queue Entries: 2048 00:30:23.729 Contiguous Queues Required: Yes 00:30:23.729 Arbitration Mechanisms Supported 00:30:23.729 Weighted Round Robin: Not Supported 00:30:23.729 Vendor Specific: Not Supported 00:30:23.729 Reset Timeout: 7500 ms 00:30:23.729 Doorbell Stride: 4 bytes 00:30:23.729 NVM Subsystem Reset: Not Supported 00:30:23.729 Command Sets Supported 00:30:23.729 NVM Command Set: Supported 00:30:23.729 Boot Partition: Not Supported 00:30:23.730 Memory Page Size Minimum: 4096 bytes 00:30:23.730 Memory Page Size Maximum: 65536 bytes 00:30:23.730 Persistent Memory Region: Not Supported 00:30:23.730 Optional Asynchronous Events Supported 00:30:23.730 Namespace Attribute Notices: Supported 00:30:23.730 Firmware Activation Notices: Not Supported 00:30:23.730 ANA Change Notices: Not Supported 00:30:23.730 PLE Aggregate Log Change Notices: Not Supported 00:30:23.730 LBA Status Info Alert Notices: Not Supported 00:30:23.730 EGE Aggregate Log Change Notices: Not Supported 00:30:23.730 Normal NVM Subsystem Shutdown event: Not Supported 00:30:23.730 Zone Descriptor Change Notices: Not Supported 00:30:23.730 Discovery Log Change Notices: Not Supported 00:30:23.730 Controller Attributes 00:30:23.730 128-bit Host Identifier: Not Supported 00:30:23.730 Non-Operational Permissive Mode: Not Supported 00:30:23.730 NVM Sets: Not Supported 00:30:23.730 Read Recovery Levels: Not Supported 00:30:23.730 Endurance Groups: Not Supported 00:30:23.730 Predictable Latency Mode: Not Supported 00:30:23.730 Traffic Based Keep ALive: Not Supported 00:30:23.730 Namespace Granularity: Not Supported 00:30:23.730 SQ Associations: Not Supported 00:30:23.730 UUID List: Not Supported 00:30:23.730 Multi-Domain Subsystem: Not Supported 00:30:23.730 Fixed Capacity Management: Not Supported 00:30:23.730 Variable Capacity Management: Not Supported 00:30:23.730 Delete Endurance Group: Not Supported 00:30:23.730 Delete NVM Set: Not Supported 00:30:23.730 Extended LBA Formats Supported: Supported 00:30:23.730 Flexible Data Placement Supported: Not Supported 00:30:23.730 00:30:23.730 Controller Memory Buffer Support 00:30:23.730 ================================ 00:30:23.730 Supported: No 00:30:23.730 00:30:23.730 Persistent Memory Region Support 00:30:23.730 ================================ 00:30:23.730 Supported: No 00:30:23.730 00:30:23.730 Admin Command Set Attributes 00:30:23.730 ============================ 00:30:23.730 Security Send/Receive: Not Supported 00:30:23.730 Format NVM: Supported 00:30:23.730 Firmware Activate/Download: Not Supported 00:30:23.730 Namespace Management: Supported 00:30:23.730 Device Self-Test: Not Supported 00:30:23.730 Directives: Supported 00:30:23.730 NVMe-MI: Not Supported 00:30:23.730 Virtualization Management: Not Supported 00:30:23.730 Doorbell Buffer Config: Supported 00:30:23.730 Get LBA Status Capability: Not Supported 00:30:23.730 Command & Feature Lockdown Capability: Not Supported 00:30:23.730 Abort Command Limit: 4 00:30:23.730 Async Event Request Limit: 4 00:30:23.730 Number of Firmware Slots: N/A 00:30:23.730 Firmware Slot 1 Read-Only: N/A 00:30:23.730 Firmware Activation Without Reset: N/A 00:30:23.730 Multiple Update Detection Support: N/A 00:30:23.730 Firmware Update Granularity: No Information 
Provided 00:30:23.730 Per-Namespace SMART Log: Yes 00:30:23.730 Asymmetric Namespace Access Log Page: Not Supported 00:30:23.730 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:23.730 Command Effects Log Page: Supported 00:30:23.730 Get Log Page Extended Data: Supported 00:30:23.730 Telemetry Log Pages: Not Supported 00:30:23.730 Persistent Event Log Pages: Not Supported 00:30:23.730 Supported Log Pages Log Page: May Support 00:30:23.730 Commands Supported & Effects Log Page: Not Supported 00:30:23.730 Feature Identifiers & Effects Log Page:May Support 00:30:23.730 NVMe-MI Commands & Effects Log Page: May Support 00:30:23.730 Data Area 4 for Telemetry Log: Not Supported 00:30:23.730 Error Log Page Entries Supported: 1 00:30:23.730 Keep Alive: Not Supported 00:30:23.730 00:30:23.730 NVM Command Set Attributes 00:30:23.730 ========================== 00:30:23.730 Submission Queue Entry Size 00:30:23.730 Max: 64 00:30:23.730 Min: 64 00:30:23.730 Completion Queue Entry Size 00:30:23.730 Max: 16 00:30:23.730 Min: 16 00:30:23.730 Number of Namespaces: 256 00:30:23.730 Compare Command: Supported 00:30:23.730 Write Uncorrectable Command: Not Supported 00:30:23.730 Dataset Management Command: Supported 00:30:23.730 Write Zeroes Command: Supported 00:30:23.730 Set Features Save Field: Supported 00:30:23.730 Reservations: Not Supported 00:30:23.730 Timestamp: Supported 00:30:23.730 Copy: Supported 00:30:23.730 Volatile Write Cache: Present 00:30:23.730 Atomic Write Unit (Normal): 1 00:30:23.730 Atomic Write Unit (PFail): 1 00:30:23.730 Atomic Compare & Write Unit: 1 00:30:23.730 Fused Compare & Write: Not Supported 00:30:23.730 Scatter-Gather List 00:30:23.730 SGL Command Set: Supported 00:30:23.730 SGL Keyed: Not Supported 00:30:23.730 SGL Bit Bucket Descriptor: Not Supported 00:30:23.730 SGL Metadata Pointer: Not Supported 00:30:23.730 Oversized SGL: Not Supported 00:30:23.730 SGL Metadata Address: Not Supported 00:30:23.730 SGL Offset: Not Supported 00:30:23.730 Transport SGL Data Block: Not Supported 00:30:23.730 Replay Protected Memory Block: Not Supported 00:30:23.730 00:30:23.730 Firmware Slot Information 00:30:23.730 ========================= 00:30:23.730 Active slot: 1 00:30:23.730 Slot 1 Firmware Revision: 1.0 00:30:23.730 00:30:23.730 00:30:23.730 Commands Supported and Effects 00:30:23.730 ============================== 00:30:23.730 Admin Commands 00:30:23.730 -------------- 00:30:23.730 Delete I/O Submission Queue (00h): Supported 00:30:23.730 Create I/O Submission Queue (01h): Supported 00:30:23.730 Get Log Page (02h): Supported 00:30:23.730 Delete I/O Completion Queue (04h): Supported 00:30:23.730 Create I/O Completion Queue (05h): Supported 00:30:23.730 Identify (06h): Supported 00:30:23.730 Abort (08h): Supported 00:30:23.730 Set Features (09h): Supported 00:30:23.730 Get Features (0Ah): Supported 00:30:23.730 Asynchronous Event Request (0Ch): Supported 00:30:23.730 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:23.730 Directive Send (19h): Supported 00:30:23.730 Directive Receive (1Ah): Supported 00:30:23.730 Virtualization Management (1Ch): Supported 00:30:23.730 Doorbell Buffer Config (7Ch): Supported 00:30:23.730 Format NVM (80h): Supported LBA-Change 00:30:23.730 I/O Commands 00:30:23.730 ------------ 00:30:23.730 Flush (00h): Supported LBA-Change 00:30:23.730 Write (01h): Supported LBA-Change 00:30:23.730 Read (02h): Supported 00:30:23.730 Compare (05h): Supported 00:30:23.730 Write Zeroes (08h): Supported LBA-Change 00:30:23.730 Dataset Management (09h): 
Supported LBA-Change 00:30:23.730 Unknown (0Ch): Supported 00:30:23.730 Unknown (12h): Supported 00:30:23.730 Copy (19h): Supported LBA-Change 00:30:23.730 Unknown (1Dh): Supported LBA-Change 00:30:23.730 00:30:23.730 Error Log 00:30:23.730 ========= 00:30:23.730 00:30:23.730 Arbitration 00:30:23.730 =========== 00:30:23.730 Arbitration Burst: no limit 00:30:23.730 00:30:23.730 Power Management 00:30:23.730 ================ 00:30:23.730 Number of Power States: 1 00:30:23.730 Current Power State: Power State #0 00:30:23.730 Power State #0: 00:30:23.730 Max Power: 25.00 W 00:30:23.730 Non-Operational State: Operational 00:30:23.730 Entry Latency: 16 microseconds 00:30:23.730 Exit Latency: 4 microseconds 00:30:23.730 Relative Read Throughput: 0 00:30:23.730 Relative Read Latency: 0 00:30:23.730 Relative Write Throughput: 0 00:30:23.730 Relative Write Latency: 0 00:30:23.730 Idle Power: Not Reported 00:30:23.730 Active Power: Not Reported 00:30:23.730 Non-Operational Permissive Mode: Not Supported 00:30:23.730 00:30:23.730 Health Information 00:30:23.730 ================== 00:30:23.730 Critical Warnings: 00:30:23.730 Available Spare Space: OK 00:30:23.730 Temperature: OK 00:30:23.730 Device Reliability: OK 00:30:23.730 Read Only: No 00:30:23.730 Volatile Memory Backup: OK 00:30:23.730 Current Temperature: 323 Kelvin (50 Celsius) 00:30:23.730 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:23.730 Available Spare: 0% 00:30:23.730 Available Spare Threshold: 0% 00:30:23.730 Life Percentage Used: 0% 00:30:23.730 Data Units Read: 8051 00:30:23.730 Data Units Written: 3924 00:30:23.730 Host Read Commands: 325436 00:30:23.730 Host Write Commands: 177958 00:30:23.730 Controller Busy Time: 0 minutes 00:30:23.730 Power Cycles: 0 00:30:23.730 Power On Hours: 0 hours 00:30:23.730 Unsafe Shutdowns: 0 00:30:23.730 Unrecoverable Media Errors: 0 00:30:23.730 Lifetime Error Log Entries: 0 00:30:23.730 Warning Temperature Time: 0 minutes 00:30:23.730 Critical Temperature Time: 0 minutes 00:30:23.730 00:30:23.730 Number of Queues 00:30:23.730 ================ 00:30:23.730 Number of I/O Submission Queues: 64 00:30:23.730 Number of I/O Completion Queues: 64 00:30:23.730 00:30:23.730 ZNS Specific Controller Data 00:30:23.730 ============================ 00:30:23.730 Zone Append Size Limit: 0 00:30:23.731 00:30:23.731 00:30:23.731 Active Namespaces 00:30:23.731 ================= 00:30:23.731 Namespace ID:1 00:30:23.731 Error Recovery Timeout: Unlimited 00:30:23.731 Command Set Identifier: NVM (00h) 00:30:23.731 Deallocate: Supported 00:30:23.731 Deallocated/Unwritten Error: Supported 00:30:23.731 Deallocated Read Value: All 0x00 00:30:23.731 Deallocate in Write Zeroes: Not Supported 00:30:23.731 Deallocated Guard Field: 0xFFFF 00:30:23.731 Flush: Supported 00:30:23.731 Reservation: Not Supported 00:30:23.731 Namespace Sharing Capabilities: Private 00:30:23.731 Size (in LBAs): 1310720 (5GiB) 00:30:23.731 Capacity (in LBAs): 1310720 (5GiB) 00:30:23.731 Utilization (in LBAs): 1310720 (5GiB) 00:30:23.731 Thin Provisioning: Not Supported 00:30:23.731 Per-NS Atomic Units: No 00:30:23.731 Maximum Single Source Range Length: 128 00:30:23.731 Maximum Copy Length: 128 00:30:23.731 Maximum Source Range Count: 128 00:30:23.731 NGUID/EUI64 Never Reused: No 00:30:23.731 Namespace Write Protected: No 00:30:23.731 Number of LBA Formats: 8 00:30:23.731 Current LBA Format: LBA Format #04 00:30:23.731 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:23.731 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:23.731 LBA 
Format #02: Data Size: 512 Metadata Size: 16 00:30:23.731 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:23.731 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:23.731 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:23.731 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:23.731 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:23.731 00:30:23.731 07:30:57 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:30:23.731 07:30:57 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:30:23.991 ===================================================== 00:30:23.991 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:23.991 ===================================================== 00:30:23.991 Controller Capabilities/Features 00:30:23.991 ================================ 00:30:23.991 Vendor ID: 1b36 00:30:23.991 Subsystem Vendor ID: 1af4 00:30:23.991 Serial Number: 12340 00:30:23.991 Model Number: QEMU NVMe Ctrl 00:30:23.991 Firmware Version: 8.0.0 00:30:23.991 Recommended Arb Burst: 6 00:30:23.991 IEEE OUI Identifier: 00 54 52 00:30:23.991 Multi-path I/O 00:30:23.991 May have multiple subsystem ports: No 00:30:23.991 May have multiple controllers: No 00:30:23.991 Associated with SR-IOV VF: No 00:30:23.991 Max Data Transfer Size: 524288 00:30:23.991 Max Number of Namespaces: 256 00:30:23.991 Max Number of I/O Queues: 64 00:30:23.991 NVMe Specification Version (VS): 1.4 00:30:23.991 NVMe Specification Version (Identify): 1.4 00:30:23.991 Maximum Queue Entries: 2048 00:30:23.991 Contiguous Queues Required: Yes 00:30:23.991 Arbitration Mechanisms Supported 00:30:23.991 Weighted Round Robin: Not Supported 00:30:23.991 Vendor Specific: Not Supported 00:30:23.991 Reset Timeout: 7500 ms 00:30:23.991 Doorbell Stride: 4 bytes 00:30:23.991 NVM Subsystem Reset: Not Supported 00:30:23.991 Command Sets Supported 00:30:23.991 NVM Command Set: Supported 00:30:23.991 Boot Partition: Not Supported 00:30:23.991 Memory Page Size Minimum: 4096 bytes 00:30:23.991 Memory Page Size Maximum: 65536 bytes 00:30:23.991 Persistent Memory Region: Not Supported 00:30:23.991 Optional Asynchronous Events Supported 00:30:23.991 Namespace Attribute Notices: Supported 00:30:23.991 Firmware Activation Notices: Not Supported 00:30:23.991 ANA Change Notices: Not Supported 00:30:23.991 PLE Aggregate Log Change Notices: Not Supported 00:30:23.991 LBA Status Info Alert Notices: Not Supported 00:30:23.991 EGE Aggregate Log Change Notices: Not Supported 00:30:23.991 Normal NVM Subsystem Shutdown event: Not Supported 00:30:23.991 Zone Descriptor Change Notices: Not Supported 00:30:23.991 Discovery Log Change Notices: Not Supported 00:30:23.991 Controller Attributes 00:30:23.991 128-bit Host Identifier: Not Supported 00:30:23.991 Non-Operational Permissive Mode: Not Supported 00:30:23.991 NVM Sets: Not Supported 00:30:23.991 Read Recovery Levels: Not Supported 00:30:23.991 Endurance Groups: Not Supported 00:30:23.991 Predictable Latency Mode: Not Supported 00:30:23.991 Traffic Based Keep ALive: Not Supported 00:30:23.991 Namespace Granularity: Not Supported 00:30:23.991 SQ Associations: Not Supported 00:30:23.991 UUID List: Not Supported 00:30:23.991 Multi-Domain Subsystem: Not Supported 00:30:23.991 Fixed Capacity Management: Not Supported 00:30:23.991 Variable Capacity Management: Not Supported 00:30:23.991 Delete Endurance Group: Not Supported 00:30:23.991 Delete NVM Set: Not Supported 00:30:23.991 Extended LBA Formats Supported: Supported 
00:30:23.991 Flexible Data Placement Supported: Not Supported 00:30:23.991 00:30:23.991 Controller Memory Buffer Support 00:30:23.991 ================================ 00:30:23.991 Supported: No 00:30:23.991 00:30:23.991 Persistent Memory Region Support 00:30:23.991 ================================ 00:30:23.991 Supported: No 00:30:23.991 00:30:23.991 Admin Command Set Attributes 00:30:23.992 ============================ 00:30:23.992 Security Send/Receive: Not Supported 00:30:23.992 Format NVM: Supported 00:30:23.992 Firmware Activate/Download: Not Supported 00:30:23.992 Namespace Management: Supported 00:30:23.992 Device Self-Test: Not Supported 00:30:23.992 Directives: Supported 00:30:23.992 NVMe-MI: Not Supported 00:30:23.992 Virtualization Management: Not Supported 00:30:23.992 Doorbell Buffer Config: Supported 00:30:23.992 Get LBA Status Capability: Not Supported 00:30:23.992 Command & Feature Lockdown Capability: Not Supported 00:30:23.992 Abort Command Limit: 4 00:30:23.992 Async Event Request Limit: 4 00:30:23.992 Number of Firmware Slots: N/A 00:30:23.992 Firmware Slot 1 Read-Only: N/A 00:30:23.992 Firmware Activation Without Reset: N/A 00:30:23.992 Multiple Update Detection Support: N/A 00:30:23.992 Firmware Update Granularity: No Information Provided 00:30:23.992 Per-Namespace SMART Log: Yes 00:30:23.992 Asymmetric Namespace Access Log Page: Not Supported 00:30:23.992 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:30:23.992 Command Effects Log Page: Supported 00:30:23.992 Get Log Page Extended Data: Supported 00:30:23.992 Telemetry Log Pages: Not Supported 00:30:23.992 Persistent Event Log Pages: Not Supported 00:30:23.992 Supported Log Pages Log Page: May Support 00:30:23.992 Commands Supported & Effects Log Page: Not Supported 00:30:23.992 Feature Identifiers & Effects Log Page:May Support 00:30:23.992 NVMe-MI Commands & Effects Log Page: May Support 00:30:23.992 Data Area 4 for Telemetry Log: Not Supported 00:30:23.992 Error Log Page Entries Supported: 1 00:30:23.992 Keep Alive: Not Supported 00:30:23.992 00:30:23.992 NVM Command Set Attributes 00:30:23.992 ========================== 00:30:23.992 Submission Queue Entry Size 00:30:23.992 Max: 64 00:30:23.992 Min: 64 00:30:23.992 Completion Queue Entry Size 00:30:23.992 Max: 16 00:30:23.992 Min: 16 00:30:23.992 Number of Namespaces: 256 00:30:23.992 Compare Command: Supported 00:30:23.992 Write Uncorrectable Command: Not Supported 00:30:23.992 Dataset Management Command: Supported 00:30:23.992 Write Zeroes Command: Supported 00:30:23.992 Set Features Save Field: Supported 00:30:23.992 Reservations: Not Supported 00:30:23.992 Timestamp: Supported 00:30:23.992 Copy: Supported 00:30:23.992 Volatile Write Cache: Present 00:30:23.992 Atomic Write Unit (Normal): 1 00:30:23.992 Atomic Write Unit (PFail): 1 00:30:23.992 Atomic Compare & Write Unit: 1 00:30:23.992 Fused Compare & Write: Not Supported 00:30:23.992 Scatter-Gather List 00:30:23.992 SGL Command Set: Supported 00:30:23.992 SGL Keyed: Not Supported 00:30:23.992 SGL Bit Bucket Descriptor: Not Supported 00:30:23.992 SGL Metadata Pointer: Not Supported 00:30:23.992 Oversized SGL: Not Supported 00:30:23.992 SGL Metadata Address: Not Supported 00:30:23.992 SGL Offset: Not Supported 00:30:23.992 Transport SGL Data Block: Not Supported 00:30:23.992 Replay Protected Memory Block: Not Supported 00:30:23.992 00:30:23.992 Firmware Slot Information 00:30:23.992 ========================= 00:30:23.992 Active slot: 1 00:30:23.992 Slot 1 Firmware Revision: 1.0 00:30:23.992 00:30:23.992 
00:30:23.992 Commands Supported and Effects 00:30:23.992 ============================== 00:30:23.992 Admin Commands 00:30:23.992 -------------- 00:30:23.992 Delete I/O Submission Queue (00h): Supported 00:30:23.992 Create I/O Submission Queue (01h): Supported 00:30:23.992 Get Log Page (02h): Supported 00:30:23.992 Delete I/O Completion Queue (04h): Supported 00:30:23.992 Create I/O Completion Queue (05h): Supported 00:30:23.992 Identify (06h): Supported 00:30:23.992 Abort (08h): Supported 00:30:23.992 Set Features (09h): Supported 00:30:23.992 Get Features (0Ah): Supported 00:30:23.992 Asynchronous Event Request (0Ch): Supported 00:30:23.992 Namespace Attachment (15h): Supported NS-Inventory-Change 00:30:23.992 Directive Send (19h): Supported 00:30:23.992 Directive Receive (1Ah): Supported 00:30:23.992 Virtualization Management (1Ch): Supported 00:30:23.992 Doorbell Buffer Config (7Ch): Supported 00:30:23.992 Format NVM (80h): Supported LBA-Change 00:30:23.992 I/O Commands 00:30:23.992 ------------ 00:30:23.992 Flush (00h): Supported LBA-Change 00:30:23.992 Write (01h): Supported LBA-Change 00:30:23.992 Read (02h): Supported 00:30:23.992 Compare (05h): Supported 00:30:23.992 Write Zeroes (08h): Supported LBA-Change 00:30:23.992 Dataset Management (09h): Supported LBA-Change 00:30:23.992 Unknown (0Ch): Supported 00:30:23.992 Unknown (12h): Supported 00:30:23.992 Copy (19h): Supported LBA-Change 00:30:23.992 Unknown (1Dh): Supported LBA-Change 00:30:23.992 00:30:23.992 Error Log 00:30:23.992 ========= 00:30:23.992 00:30:23.992 Arbitration 00:30:23.992 =========== 00:30:23.992 Arbitration Burst: no limit 00:30:23.992 00:30:23.992 Power Management 00:30:23.992 ================ 00:30:23.992 Number of Power States: 1 00:30:23.992 Current Power State: Power State #0 00:30:23.992 Power State #0: 00:30:23.992 Max Power: 25.00 W 00:30:23.992 Non-Operational State: Operational 00:30:23.992 Entry Latency: 16 microseconds 00:30:23.992 Exit Latency: 4 microseconds 00:30:23.992 Relative Read Throughput: 0 00:30:23.992 Relative Read Latency: 0 00:30:23.992 Relative Write Throughput: 0 00:30:23.992 Relative Write Latency: 0 00:30:23.992 Idle Power: Not Reported 00:30:23.992 Active Power: Not Reported 00:30:23.992 Non-Operational Permissive Mode: Not Supported 00:30:23.992 00:30:23.992 Health Information 00:30:23.992 ================== 00:30:23.992 Critical Warnings: 00:30:23.992 Available Spare Space: OK 00:30:23.992 Temperature: OK 00:30:23.992 Device Reliability: OK 00:30:23.992 Read Only: No 00:30:23.992 Volatile Memory Backup: OK 00:30:23.992 Current Temperature: 323 Kelvin (50 Celsius) 00:30:23.992 Temperature Threshold: 343 Kelvin (70 Celsius) 00:30:23.992 Available Spare: 0% 00:30:23.992 Available Spare Threshold: 0% 00:30:23.992 Life Percentage Used: 0% 00:30:23.992 Data Units Read: 8051 00:30:23.992 Data Units Written: 3924 00:30:23.992 Host Read Commands: 325436 00:30:23.992 Host Write Commands: 177958 00:30:23.992 Controller Busy Time: 0 minutes 00:30:23.992 Power Cycles: 0 00:30:23.992 Power On Hours: 0 hours 00:30:23.992 Unsafe Shutdowns: 0 00:30:23.992 Unrecoverable Media Errors: 0 00:30:23.992 Lifetime Error Log Entries: 0 00:30:23.992 Warning Temperature Time: 0 minutes 00:30:23.992 Critical Temperature Time: 0 minutes 00:30:23.992 00:30:23.992 Number of Queues 00:30:23.992 ================ 00:30:23.992 Number of I/O Submission Queues: 64 00:30:23.992 Number of I/O Completion Queues: 64 00:30:23.992 00:30:23.992 ZNS Specific Controller Data 00:30:23.992 ============================ 
00:30:23.992 Zone Append Size Limit: 0 00:30:23.992 00:30:23.992 00:30:23.992 Active Namespaces 00:30:23.992 ================= 00:30:23.992 Namespace ID:1 00:30:23.992 Error Recovery Timeout: Unlimited 00:30:23.992 Command Set Identifier: NVM (00h) 00:30:23.992 Deallocate: Supported 00:30:23.992 Deallocated/Unwritten Error: Supported 00:30:23.992 Deallocated Read Value: All 0x00 00:30:23.992 Deallocate in Write Zeroes: Not Supported 00:30:23.992 Deallocated Guard Field: 0xFFFF 00:30:23.992 Flush: Supported 00:30:23.992 Reservation: Not Supported 00:30:23.992 Namespace Sharing Capabilities: Private 00:30:23.992 Size (in LBAs): 1310720 (5GiB) 00:30:23.992 Capacity (in LBAs): 1310720 (5GiB) 00:30:23.992 Utilization (in LBAs): 1310720 (5GiB) 00:30:23.992 Thin Provisioning: Not Supported 00:30:23.992 Per-NS Atomic Units: No 00:30:23.992 Maximum Single Source Range Length: 128 00:30:23.992 Maximum Copy Length: 128 00:30:23.992 Maximum Source Range Count: 128 00:30:23.993 NGUID/EUI64 Never Reused: No 00:30:23.993 Namespace Write Protected: No 00:30:23.993 Number of LBA Formats: 8 00:30:23.993 Current LBA Format: LBA Format #04 00:30:23.993 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:23.993 LBA Format #01: Data Size: 512 Metadata Size: 8 00:30:23.993 LBA Format #02: Data Size: 512 Metadata Size: 16 00:30:23.993 LBA Format #03: Data Size: 512 Metadata Size: 64 00:30:23.993 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:30:23.993 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:30:23.993 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:30:23.993 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:30:23.993 00:30:23.993 00:30:23.993 real 0m0.617s 00:30:23.993 user 0m0.268s 00:30:23.993 sys 0m0.259s 00:30:23.993 07:30:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:23.993 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:23.993 ************************************ 00:30:23.993 END TEST nvme_identify 00:30:23.993 ************************************ 00:30:24.252 07:30:57 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:30:24.252 07:30:57 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:24.252 07:30:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:24.252 07:30:57 -- common/autotest_common.sh@10 -- # set +x 00:30:24.252 ************************************ 00:30:24.252 START TEST nvme_perf 00:30:24.252 ************************************ 00:30:24.252 07:30:57 -- common/autotest_common.sh@1102 -- # nvme_perf 00:30:24.252 07:30:57 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:30:25.630 Initializing NVMe Controllers 00:30:25.630 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:25.630 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:25.630 Initialization complete. Launching workers. 
00:30:25.630 ======================================================== 00:30:25.630 Latency(us) 00:30:25.630 Device Information : IOPS MiB/s Average min max 00:30:25.630 PCIE (0000:00:06.0) NSID 1 from core 0: 55938.00 655.52 2287.83 870.09 7091.63 00:30:25.630 ======================================================== 00:30:25.630 Total : 55938.00 655.52 2287.83 870.09 7091.63 00:30:25.630 00:30:25.630 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:25.630 ================================================================================= 00:30:25.630 1.00000% : 1444.771us 00:30:25.630 10.00000% : 1623.505us 00:30:25.630 25.00000% : 1869.265us 00:30:25.630 50.00000% : 2278.865us 00:30:25.630 75.00000% : 2681.018us 00:30:25.630 90.00000% : 2934.225us 00:30:25.630 95.00000% : 3068.276us 00:30:25.630 98.00000% : 3366.167us 00:30:25.630 99.00000% : 3544.902us 00:30:25.630 99.50000% : 3664.058us 00:30:25.630 99.90000% : 5213.091us 00:30:25.630 99.99000% : 6881.280us 00:30:25.630 99.99900% : 7119.593us 00:30:25.630 99.99990% : 7119.593us 00:30:25.630 99.99999% : 7119.593us 00:30:25.630 00:30:25.630 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:25.630 ============================================================================== 00:30:25.630 Range in us Cumulative IO count 00:30:25.630 867.607 - 871.331: 0.0018% ( 1) 00:30:25.630 882.502 - 886.225: 0.0036% ( 1) 00:30:25.630 1258.589 - 1266.036: 0.0054% ( 1) 00:30:25.630 1273.484 - 1280.931: 0.0072% ( 1) 00:30:25.630 1280.931 - 1288.378: 0.0089% ( 1) 00:30:25.630 1295.825 - 1303.273: 0.0107% ( 1) 00:30:25.630 1303.273 - 1310.720: 0.0125% ( 1) 00:30:25.630 1318.167 - 1325.615: 0.0179% ( 3) 00:30:25.630 1325.615 - 1333.062: 0.0268% ( 5) 00:30:25.630 1333.062 - 1340.509: 0.0322% ( 3) 00:30:25.630 1340.509 - 1347.956: 0.0358% ( 2) 00:30:25.630 1347.956 - 1355.404: 0.0411% ( 3) 00:30:25.630 1355.404 - 1362.851: 0.0626% ( 12) 00:30:25.630 1362.851 - 1370.298: 0.0894% ( 15) 00:30:25.630 1370.298 - 1377.745: 0.1287% ( 22) 00:30:25.630 1377.745 - 1385.193: 0.1716% ( 24) 00:30:25.630 1385.193 - 1392.640: 0.2235% ( 29) 00:30:25.630 1392.640 - 1400.087: 0.3057% ( 46) 00:30:25.630 1400.087 - 1407.535: 0.3969% ( 51) 00:30:25.630 1407.535 - 1414.982: 0.4880% ( 51) 00:30:25.630 1414.982 - 1422.429: 0.6060% ( 66) 00:30:25.630 1422.429 - 1429.876: 0.7526% ( 82) 00:30:25.630 1429.876 - 1437.324: 0.9403% ( 105) 00:30:25.630 1437.324 - 1444.771: 1.1191% ( 100) 00:30:25.630 1444.771 - 1452.218: 1.3408% ( 124) 00:30:25.630 1452.218 - 1459.665: 1.5624% ( 124) 00:30:25.630 1459.665 - 1467.113: 1.8467% ( 159) 00:30:25.630 1467.113 - 1474.560: 2.1256% ( 156) 00:30:25.630 1474.560 - 1482.007: 2.4241% ( 167) 00:30:25.630 1482.007 - 1489.455: 2.7459% ( 180) 00:30:25.630 1489.455 - 1496.902: 3.0784% ( 186) 00:30:25.630 1496.902 - 1504.349: 3.3966% ( 178) 00:30:25.630 1504.349 - 1511.796: 3.7506% ( 198) 00:30:25.630 1511.796 - 1519.244: 4.1474% ( 222) 00:30:25.630 1519.244 - 1526.691: 4.4960% ( 195) 00:30:25.630 1526.691 - 1534.138: 4.9054% ( 229) 00:30:25.630 1534.138 - 1541.585: 5.3291% ( 237) 00:30:25.630 1541.585 - 1549.033: 5.7224% ( 220) 00:30:25.630 1549.033 - 1556.480: 6.1175% ( 221) 00:30:25.630 1556.480 - 1563.927: 6.5197% ( 225) 00:30:25.630 1563.927 - 1571.375: 6.9720% ( 253) 00:30:25.630 1571.375 - 1578.822: 7.4064% ( 243) 00:30:25.630 1578.822 - 1586.269: 7.8426% ( 244) 00:30:25.630 1586.269 - 1593.716: 8.2824% ( 246) 00:30:25.630 1593.716 - 1601.164: 8.7525% ( 263) 00:30:25.630 1601.164 - 1608.611: 9.1959% ( 248) 
00:30:25.630 1608.611 - 1616.058: 9.6375% ( 247) 00:30:25.630 1616.058 - 1623.505: 10.1094% ( 264) 00:30:25.630 1623.505 - 1630.953: 10.5295% ( 235) 00:30:25.630 1630.953 - 1638.400: 10.9818% ( 253) 00:30:25.630 1638.400 - 1645.847: 11.4412% ( 257) 00:30:25.630 1645.847 - 1653.295: 11.9132% ( 264) 00:30:25.630 1653.295 - 1660.742: 12.3601% ( 250) 00:30:25.630 1660.742 - 1668.189: 12.8231% ( 259) 00:30:25.630 1668.189 - 1675.636: 13.2772% ( 254) 00:30:25.630 1675.636 - 1683.084: 13.7313% ( 254) 00:30:25.630 1683.084 - 1690.531: 14.1675% ( 244) 00:30:25.630 1690.531 - 1697.978: 14.6358% ( 262) 00:30:25.630 1697.978 - 1705.425: 15.0720% ( 244) 00:30:25.630 1705.425 - 1712.873: 15.5172% ( 249) 00:30:25.630 1712.873 - 1720.320: 15.9838% ( 261) 00:30:25.630 1720.320 - 1727.767: 16.4200% ( 244) 00:30:25.630 1727.767 - 1735.215: 16.9152% ( 277) 00:30:25.630 1735.215 - 1742.662: 17.3585% ( 248) 00:30:25.630 1742.662 - 1750.109: 17.8376% ( 268) 00:30:25.630 1750.109 - 1757.556: 18.3185% ( 269) 00:30:25.630 1757.556 - 1765.004: 18.7511% ( 242) 00:30:25.630 1765.004 - 1772.451: 19.2159% ( 260) 00:30:25.630 1772.451 - 1779.898: 19.6807% ( 260) 00:30:25.630 1779.898 - 1787.345: 20.1276% ( 250) 00:30:25.630 1787.345 - 1794.793: 20.5960% ( 262) 00:30:25.630 1794.793 - 1802.240: 21.0680% ( 264) 00:30:25.630 1802.240 - 1809.687: 21.5167% ( 251) 00:30:25.630 1809.687 - 1817.135: 21.9886% ( 264) 00:30:25.630 1817.135 - 1824.582: 22.4409% ( 253) 00:30:25.630 1824.582 - 1832.029: 22.9254% ( 271) 00:30:25.631 1832.029 - 1839.476: 23.3741% ( 251) 00:30:25.631 1839.476 - 1846.924: 23.8496% ( 266) 00:30:25.631 1846.924 - 1854.371: 24.3162% ( 261) 00:30:25.631 1854.371 - 1861.818: 24.7721% ( 255) 00:30:25.631 1861.818 - 1869.265: 25.2065% ( 243) 00:30:25.631 1869.265 - 1876.713: 25.6909% ( 271) 00:30:25.631 1876.713 - 1884.160: 26.1397% ( 251) 00:30:25.631 1884.160 - 1891.607: 26.6188% ( 268) 00:30:25.631 1891.607 - 1899.055: 27.0907% ( 264) 00:30:25.631 1899.055 - 1906.502: 27.5251% ( 243) 00:30:25.631 1906.502 - 1921.396: 28.4654% ( 526) 00:30:25.631 1921.396 - 1936.291: 29.4058% ( 526) 00:30:25.631 1936.291 - 1951.185: 30.2996% ( 500) 00:30:25.631 1951.185 - 1966.080: 31.2310% ( 521) 00:30:25.631 1966.080 - 1980.975: 32.1660% ( 523) 00:30:25.631 1980.975 - 1995.869: 33.0866% ( 515) 00:30:25.631 1995.869 - 2010.764: 34.0091% ( 516) 00:30:25.631 2010.764 - 2025.658: 34.9476% ( 525) 00:30:25.631 2025.658 - 2040.553: 35.8558% ( 508) 00:30:25.631 2040.553 - 2055.447: 36.7729% ( 513) 00:30:25.631 2055.447 - 2070.342: 37.6667% ( 500) 00:30:25.631 2070.342 - 2085.236: 38.5874% ( 515) 00:30:25.631 2085.236 - 2100.131: 39.4830% ( 501) 00:30:25.631 2100.131 - 2115.025: 40.4037% ( 515) 00:30:25.631 2115.025 - 2129.920: 41.3440% ( 526) 00:30:25.631 2129.920 - 2144.815: 42.2539% ( 509) 00:30:25.631 2144.815 - 2159.709: 43.1746% ( 515) 00:30:25.631 2159.709 - 2174.604: 44.1417% ( 541) 00:30:25.631 2174.604 - 2189.498: 45.0445% ( 505) 00:30:25.631 2189.498 - 2204.393: 45.9741% ( 520) 00:30:25.631 2204.393 - 2219.287: 46.8984% ( 517) 00:30:25.631 2219.287 - 2234.182: 47.8262% ( 519) 00:30:25.631 2234.182 - 2249.076: 48.7468% ( 515) 00:30:25.631 2249.076 - 2263.971: 49.6442% ( 502) 00:30:25.631 2263.971 - 2278.865: 50.5828% ( 525) 00:30:25.631 2278.865 - 2293.760: 51.5017% ( 514) 00:30:25.631 2293.760 - 2308.655: 52.4384% ( 524) 00:30:25.631 2308.655 - 2323.549: 53.3573% ( 514) 00:30:25.631 2323.549 - 2338.444: 54.2797% ( 516) 00:30:25.631 2338.444 - 2353.338: 55.2183% ( 525) 00:30:25.631 2353.338 - 2368.233: 56.1515% ( 522) 
00:30:25.631 2368.233 - 2383.127: 57.0596% ( 508) 00:30:25.631 2383.127 - 2398.022: 57.9642% ( 506) 00:30:25.631 2398.022 - 2412.916: 58.9099% ( 529) 00:30:25.631 2412.916 - 2427.811: 59.7894% ( 492) 00:30:25.631 2427.811 - 2442.705: 60.7208% ( 521) 00:30:25.631 2442.705 - 2457.600: 61.6343% ( 511) 00:30:25.631 2457.600 - 2472.495: 62.5425% ( 508) 00:30:25.631 2472.495 - 2487.389: 63.4649% ( 516) 00:30:25.631 2487.389 - 2502.284: 64.4088% ( 528) 00:30:25.631 2502.284 - 2517.178: 65.3259% ( 513) 00:30:25.631 2517.178 - 2532.073: 66.2537% ( 519) 00:30:25.631 2532.073 - 2546.967: 67.1386% ( 495) 00:30:25.631 2546.967 - 2561.862: 68.0825% ( 528) 00:30:25.631 2561.862 - 2576.756: 68.9817% ( 503) 00:30:25.631 2576.756 - 2591.651: 69.9006% ( 514) 00:30:25.631 2591.651 - 2606.545: 70.8266% ( 518) 00:30:25.631 2606.545 - 2621.440: 71.7187% ( 499) 00:30:25.631 2621.440 - 2636.335: 72.6340% ( 512) 00:30:25.631 2636.335 - 2651.229: 73.5761% ( 527) 00:30:25.631 2651.229 - 2666.124: 74.4950% ( 514) 00:30:25.631 2666.124 - 2681.018: 75.4031% ( 508) 00:30:25.631 2681.018 - 2695.913: 76.3256% ( 516) 00:30:25.631 2695.913 - 2710.807: 77.2462% ( 515) 00:30:25.631 2710.807 - 2725.702: 78.1740% ( 519) 00:30:25.631 2725.702 - 2740.596: 79.0840% ( 509) 00:30:25.631 2740.596 - 2755.491: 79.9903% ( 507) 00:30:25.631 2755.491 - 2770.385: 80.9199% ( 520) 00:30:25.631 2770.385 - 2785.280: 81.8424% ( 516) 00:30:25.631 2785.280 - 2800.175: 82.7845% ( 527) 00:30:25.631 2800.175 - 2815.069: 83.7016% ( 513) 00:30:25.631 2815.069 - 2829.964: 84.6008% ( 503) 00:30:25.631 2829.964 - 2844.858: 85.5233% ( 516) 00:30:25.631 2844.858 - 2859.753: 86.4189% ( 501) 00:30:25.631 2859.753 - 2874.647: 87.3110% ( 499) 00:30:25.631 2874.647 - 2889.542: 88.1833% ( 488) 00:30:25.631 2889.542 - 2904.436: 89.0039% ( 459) 00:30:25.631 2904.436 - 2919.331: 89.8048% ( 448) 00:30:25.631 2919.331 - 2934.225: 90.5717% ( 429) 00:30:25.631 2934.225 - 2949.120: 91.2814% ( 397) 00:30:25.631 2949.120 - 2964.015: 91.9286% ( 362) 00:30:25.631 2964.015 - 2978.909: 92.5471% ( 346) 00:30:25.631 2978.909 - 2993.804: 93.0924% ( 305) 00:30:25.631 2993.804 - 3008.698: 93.5715% ( 268) 00:30:25.631 3008.698 - 3023.593: 94.0220% ( 252) 00:30:25.631 3023.593 - 3038.487: 94.4278% ( 227) 00:30:25.631 3038.487 - 3053.382: 94.7692% ( 191) 00:30:25.631 3053.382 - 3068.276: 95.0964% ( 183) 00:30:25.631 3068.276 - 3083.171: 95.3663% ( 151) 00:30:25.631 3083.171 - 3098.065: 95.5933% ( 127) 00:30:25.631 3098.065 - 3112.960: 95.8043% ( 118) 00:30:25.631 3112.960 - 3127.855: 96.0009% ( 110) 00:30:25.631 3127.855 - 3142.749: 96.1725% ( 96) 00:30:25.631 3142.749 - 3157.644: 96.3424% ( 95) 00:30:25.631 3157.644 - 3172.538: 96.4997% ( 88) 00:30:25.631 3172.538 - 3187.433: 96.6320% ( 74) 00:30:25.631 3187.433 - 3202.327: 96.7732% ( 79) 00:30:25.631 3202.327 - 3217.222: 96.9037% ( 73) 00:30:25.631 3217.222 - 3232.116: 97.0235% ( 67) 00:30:25.631 3232.116 - 3247.011: 97.1522% ( 72) 00:30:25.631 3247.011 - 3261.905: 97.2684% ( 65) 00:30:25.631 3261.905 - 3276.800: 97.3846% ( 65) 00:30:25.631 3276.800 - 3291.695: 97.5026% ( 66) 00:30:25.631 3291.695 - 3306.589: 97.6170% ( 64) 00:30:25.631 3306.589 - 3321.484: 97.7332% ( 65) 00:30:25.631 3321.484 - 3336.378: 97.8280% ( 53) 00:30:25.631 3336.378 - 3351.273: 97.9299% ( 57) 00:30:25.631 3351.273 - 3366.167: 98.0264% ( 54) 00:30:25.631 3366.167 - 3381.062: 98.1211% ( 53) 00:30:25.631 3381.062 - 3395.956: 98.2195% ( 55) 00:30:25.631 3395.956 - 3410.851: 98.3196% ( 56) 00:30:25.631 3410.851 - 3425.745: 98.4107% ( 51) 00:30:25.631 3425.745 - 
3440.640: 98.4965% ( 48) 00:30:25.631 3440.640 - 3455.535: 98.5841% ( 49) 00:30:25.631 3455.535 - 3470.429: 98.6682% ( 47) 00:30:25.631 3470.429 - 3485.324: 98.7593% ( 51) 00:30:25.631 3485.324 - 3500.218: 98.8380% ( 44) 00:30:25.631 3500.218 - 3515.113: 98.9095% ( 40) 00:30:25.631 3515.113 - 3530.007: 98.9846% ( 42) 00:30:25.631 3530.007 - 3544.902: 99.0579% ( 41) 00:30:25.631 3544.902 - 3559.796: 99.1294% ( 40) 00:30:25.631 3559.796 - 3574.691: 99.1884% ( 33) 00:30:25.631 3574.691 - 3589.585: 99.2527% ( 36) 00:30:25.631 3589.585 - 3604.480: 99.3153% ( 35) 00:30:25.631 3604.480 - 3619.375: 99.3743% ( 33) 00:30:25.631 3619.375 - 3634.269: 99.4297% ( 31) 00:30:25.631 3634.269 - 3649.164: 99.4798% ( 28) 00:30:25.631 3649.164 - 3664.058: 99.5191% ( 22) 00:30:25.631 3664.058 - 3678.953: 99.5584% ( 22) 00:30:25.631 3678.953 - 3693.847: 99.5906% ( 18) 00:30:25.631 3693.847 - 3708.742: 99.6210% ( 17) 00:30:25.631 3708.742 - 3723.636: 99.6425% ( 12) 00:30:25.631 3723.636 - 3738.531: 99.6586% ( 9) 00:30:25.631 3738.531 - 3753.425: 99.6729% ( 8) 00:30:25.631 3753.425 - 3768.320: 99.6889% ( 9) 00:30:25.631 3768.320 - 3783.215: 99.6961% ( 4) 00:30:25.631 3783.215 - 3798.109: 99.7068% ( 6) 00:30:25.631 3798.109 - 3813.004: 99.7158% ( 5) 00:30:25.631 3813.004 - 3842.793: 99.7283% ( 7) 00:30:25.631 3842.793 - 3872.582: 99.7390% ( 6) 00:30:25.631 3872.582 - 3902.371: 99.7497% ( 6) 00:30:25.631 3902.371 - 3932.160: 99.7587% ( 5) 00:30:25.631 3932.160 - 3961.949: 99.7640% ( 3) 00:30:25.631 3961.949 - 3991.738: 99.7694% ( 3) 00:30:25.631 3991.738 - 4021.527: 99.7730% ( 2) 00:30:25.631 4021.527 - 4051.316: 99.7783% ( 3) 00:30:25.631 4051.316 - 4081.105: 99.7837% ( 3) 00:30:25.631 4081.105 - 4110.895: 99.7873% ( 2) 00:30:25.631 4110.895 - 4140.684: 99.7926% ( 3) 00:30:25.631 4140.684 - 4170.473: 99.7980% ( 3) 00:30:25.631 4170.473 - 4200.262: 99.8016% ( 2) 00:30:25.631 4200.262 - 4230.051: 99.8069% ( 3) 00:30:25.631 4230.051 - 4259.840: 99.8123% ( 3) 00:30:25.631 4259.840 - 4289.629: 99.8177% ( 3) 00:30:25.631 4289.629 - 4319.418: 99.8230% ( 3) 00:30:25.631 4319.418 - 4349.207: 99.8266% ( 2) 00:30:25.631 4349.207 - 4378.996: 99.8320% ( 3) 00:30:25.631 4378.996 - 4408.785: 99.8373% ( 3) 00:30:25.631 4408.785 - 4438.575: 99.8427% ( 3) 00:30:25.631 4438.575 - 4468.364: 99.8480% ( 3) 00:30:25.631 4468.364 - 4498.153: 99.8516% ( 2) 00:30:25.631 4498.153 - 4527.942: 99.8552% ( 2) 00:30:25.631 4527.942 - 4557.731: 99.8606% ( 3) 00:30:25.631 4557.731 - 4587.520: 99.8659% ( 3) 00:30:25.631 4587.520 - 4617.309: 99.8677% ( 1) 00:30:25.631 4617.309 - 4647.098: 99.8695% ( 1) 00:30:25.632 4647.098 - 4676.887: 99.8713% ( 1) 00:30:25.632 4676.887 - 4706.676: 99.8731% ( 1) 00:30:25.632 4706.676 - 4736.465: 99.8749% ( 1) 00:30:25.632 4736.465 - 4766.255: 99.8766% ( 1) 00:30:25.632 4766.255 - 4796.044: 99.8784% ( 1) 00:30:25.632 4796.044 - 4825.833: 99.8802% ( 1) 00:30:25.632 4825.833 - 4855.622: 99.8820% ( 1) 00:30:25.632 4855.622 - 4885.411: 99.8838% ( 1) 00:30:25.632 4885.411 - 4915.200: 99.8856% ( 1) 00:30:25.632 4944.989 - 4974.778: 99.8874% ( 1) 00:30:25.632 4974.778 - 5004.567: 99.8892% ( 1) 00:30:25.632 5004.567 - 5034.356: 99.8910% ( 1) 00:30:25.632 5034.356 - 5064.145: 99.8927% ( 1) 00:30:25.632 5064.145 - 5093.935: 99.8945% ( 1) 00:30:25.632 5093.935 - 5123.724: 99.8963% ( 1) 00:30:25.632 5123.724 - 5153.513: 99.8981% ( 1) 00:30:25.632 5153.513 - 5183.302: 99.8999% ( 1) 00:30:25.632 5183.302 - 5213.091: 99.9017% ( 1) 00:30:25.632 5213.091 - 5242.880: 99.9035% ( 1) 00:30:25.632 5242.880 - 5272.669: 99.9053% ( 1) 
00:30:25.632 5302.458 - 5332.247: 99.9070% ( 1) 00:30:25.632 5332.247 - 5362.036: 99.9088% ( 1) 00:30:25.632 5362.036 - 5391.825: 99.9106% ( 1) 00:30:25.632 5391.825 - 5421.615: 99.9124% ( 1) 00:30:25.632 5421.615 - 5451.404: 99.9142% ( 1) 00:30:25.632 5451.404 - 5481.193: 99.9160% ( 1) 00:30:25.632 5481.193 - 5510.982: 99.9178% ( 1) 00:30:25.632 5510.982 - 5540.771: 99.9196% ( 1) 00:30:25.632 5540.771 - 5570.560: 99.9213% ( 1) 00:30:25.632 5600.349 - 5630.138: 99.9231% ( 1) 00:30:25.632 5630.138 - 5659.927: 99.9249% ( 1) 00:30:25.632 5659.927 - 5689.716: 99.9267% ( 1) 00:30:25.632 5689.716 - 5719.505: 99.9285% ( 1) 00:30:25.632 5719.505 - 5749.295: 99.9303% ( 1) 00:30:25.632 5749.295 - 5779.084: 99.9321% ( 1) 00:30:25.632 5779.084 - 5808.873: 99.9339% ( 1) 00:30:25.632 5808.873 - 5838.662: 99.9356% ( 1) 00:30:25.632 5838.662 - 5868.451: 99.9374% ( 1) 00:30:25.632 5868.451 - 5898.240: 99.9392% ( 1) 00:30:25.632 5898.240 - 5928.029: 99.9410% ( 1) 00:30:25.632 5928.029 - 5957.818: 99.9428% ( 1) 00:30:25.632 5957.818 - 5987.607: 99.9446% ( 1) 00:30:25.632 6017.396 - 6047.185: 99.9464% ( 1) 00:30:25.632 6047.185 - 6076.975: 99.9482% ( 1) 00:30:25.632 6076.975 - 6106.764: 99.9499% ( 1) 00:30:25.632 6106.764 - 6136.553: 99.9517% ( 1) 00:30:25.632 6136.553 - 6166.342: 99.9535% ( 1) 00:30:25.632 6166.342 - 6196.131: 99.9553% ( 1) 00:30:25.632 6196.131 - 6225.920: 99.9571% ( 1) 00:30:25.632 6225.920 - 6255.709: 99.9589% ( 1) 00:30:25.632 6255.709 - 6285.498: 99.9607% ( 1) 00:30:25.632 6285.498 - 6315.287: 99.9625% ( 1) 00:30:25.632 6315.287 - 6345.076: 99.9642% ( 1) 00:30:25.632 6345.076 - 6374.865: 99.9660% ( 1) 00:30:25.632 6404.655 - 6434.444: 99.9678% ( 1) 00:30:25.632 6434.444 - 6464.233: 99.9696% ( 1) 00:30:25.632 6464.233 - 6494.022: 99.9714% ( 1) 00:30:25.632 6523.811 - 6553.600: 99.9732% ( 1) 00:30:25.632 6553.600 - 6583.389: 99.9750% ( 1) 00:30:25.632 6583.389 - 6613.178: 99.9768% ( 1) 00:30:25.632 6613.178 - 6642.967: 99.9785% ( 1) 00:30:25.632 6642.967 - 6672.756: 99.9803% ( 1) 00:30:25.632 6672.756 - 6702.545: 99.9821% ( 1) 00:30:25.632 6702.545 - 6732.335: 99.9839% ( 1) 00:30:25.632 6732.335 - 6762.124: 99.9857% ( 1) 00:30:25.632 6791.913 - 6821.702: 99.9875% ( 1) 00:30:25.632 6821.702 - 6851.491: 99.9893% ( 1) 00:30:25.632 6851.491 - 6881.280: 99.9911% ( 1) 00:30:25.632 6881.280 - 6911.069: 99.9928% ( 1) 00:30:25.632 6911.069 - 6940.858: 99.9946% ( 1) 00:30:25.632 6940.858 - 6970.647: 99.9964% ( 1) 00:30:25.632 7000.436 - 7030.225: 99.9982% ( 1) 00:30:25.632 7089.804 - 7119.593: 100.0000% ( 1) 00:30:25.632 00:30:25.632 07:30:59 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:30:27.007 Initializing NVMe Controllers 00:30:27.007 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:27.007 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:27.007 Initialization complete. Launching workers. 
00:30:27.007 ======================================================== 00:30:27.007 Latency(us) 00:30:27.007 Device Information : IOPS MiB/s Average min max 00:30:27.007 PCIE (0000:00:06.0) NSID 1 from core 0: 45240.61 530.16 2834.11 970.08 13451.20 00:30:27.007 ======================================================== 00:30:27.007 Total : 45240.61 530.16 2834.11 970.08 13451.20 00:30:27.007 00:30:27.007 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:27.007 ================================================================================= 00:30:27.007 1.00000% : 1660.742us 00:30:27.007 10.00000% : 2249.076us 00:30:27.007 25.00000% : 2442.705us 00:30:27.007 50.00000% : 2740.596us 00:30:27.007 75.00000% : 3232.116us 00:30:27.007 90.00000% : 3574.691us 00:30:27.007 95.00000% : 3738.531us 00:30:27.007 98.00000% : 3932.160us 00:30:27.007 99.00000% : 4051.316us 00:30:27.007 99.50000% : 4200.262us 00:30:27.007 99.90000% : 5719.505us 00:30:27.007 99.99000% : 13405.091us 00:30:27.007 99.99900% : 13464.669us 00:30:27.007 99.99990% : 13464.669us 00:30:27.007 99.99999% : 13464.669us 00:30:27.007 00:30:27.007 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:30:27.007 ============================================================================== 00:30:27.007 Range in us Cumulative IO count 00:30:27.007 968.145 - 975.593: 0.0022% ( 1) 00:30:27.007 1154.327 - 1161.775: 0.0044% ( 1) 00:30:27.007 1161.775 - 1169.222: 0.0066% ( 1) 00:30:27.007 1176.669 - 1184.116: 0.0110% ( 2) 00:30:27.007 1184.116 - 1191.564: 0.0155% ( 2) 00:30:27.007 1191.564 - 1199.011: 0.0177% ( 1) 00:30:27.007 1206.458 - 1213.905: 0.0331% ( 7) 00:30:27.007 1213.905 - 1221.353: 0.0663% ( 15) 00:30:27.007 1221.353 - 1228.800: 0.0972% ( 14) 00:30:27.007 1228.800 - 1236.247: 0.0994% ( 1) 00:30:27.007 1236.247 - 1243.695: 0.1126% ( 6) 00:30:27.007 1243.695 - 1251.142: 0.1193% ( 3) 00:30:27.007 1251.142 - 1258.589: 0.1568% ( 17) 00:30:27.007 1258.589 - 1266.036: 0.1635% ( 3) 00:30:27.007 1273.484 - 1280.931: 0.1679% ( 2) 00:30:27.007 1280.931 - 1288.378: 0.1723% ( 2) 00:30:27.007 1288.378 - 1295.825: 0.2010% ( 13) 00:30:27.007 1303.273 - 1310.720: 0.2054% ( 2) 00:30:27.007 1325.615 - 1333.062: 0.2076% ( 1) 00:30:27.007 1333.062 - 1340.509: 0.2098% ( 1) 00:30:27.007 1340.509 - 1347.956: 0.2231% ( 6) 00:30:27.007 1347.956 - 1355.404: 0.2408% ( 8) 00:30:27.007 1355.404 - 1362.851: 0.2695% ( 13) 00:30:27.007 1362.851 - 1370.298: 0.2717% ( 1) 00:30:27.007 1370.298 - 1377.745: 0.3137% ( 19) 00:30:27.007 1377.745 - 1385.193: 0.3159% ( 1) 00:30:27.007 1385.193 - 1392.640: 0.3225% ( 3) 00:30:27.007 1392.640 - 1400.087: 0.4042% ( 37) 00:30:27.007 1400.087 - 1407.535: 0.4108% ( 3) 00:30:27.007 1407.535 - 1414.982: 0.4506% ( 18) 00:30:27.007 1414.982 - 1422.429: 0.4594% ( 4) 00:30:27.007 1422.429 - 1429.876: 0.4683% ( 4) 00:30:27.007 1429.876 - 1437.324: 0.4749% ( 3) 00:30:27.007 1437.324 - 1444.771: 0.4859% ( 5) 00:30:27.007 1444.771 - 1452.218: 0.4948% ( 4) 00:30:27.007 1452.218 - 1459.665: 0.5169% ( 10) 00:30:27.007 1459.665 - 1467.113: 0.5213% ( 2) 00:30:27.007 1467.113 - 1474.560: 0.5323% ( 5) 00:30:27.007 1474.560 - 1482.007: 0.5412% ( 4) 00:30:27.007 1482.007 - 1489.455: 0.5500% ( 4) 00:30:27.007 1489.455 - 1496.902: 0.5655% ( 7) 00:30:27.007 1496.902 - 1504.349: 0.5743% ( 4) 00:30:27.007 1504.349 - 1511.796: 0.5920% ( 8) 00:30:27.007 1511.796 - 1519.244: 0.6339% ( 19) 00:30:27.007 1519.244 - 1526.691: 0.6671% ( 15) 00:30:27.007 1526.691 - 1534.138: 0.6759% ( 4) 00:30:27.007 1534.138 - 1541.585: 0.6847% ( 4) 
00:30:27.007 1541.585 - 1549.033: 0.7068% ( 10) 00:30:27.007 1549.033 - 1556.480: 0.7245% ( 8) 00:30:27.007 1556.480 - 1563.927: 0.7377% ( 6) 00:30:27.007 1563.927 - 1571.375: 0.7510% ( 6) 00:30:27.007 1571.375 - 1578.822: 0.7665% ( 7) 00:30:27.007 1578.822 - 1586.269: 0.7775% ( 5) 00:30:27.007 1586.269 - 1593.716: 0.7908% ( 6) 00:30:27.007 1593.716 - 1601.164: 0.8173% ( 12) 00:30:27.007 1601.164 - 1608.611: 0.8261% ( 4) 00:30:27.007 1608.611 - 1616.058: 0.8371% ( 5) 00:30:27.007 1616.058 - 1623.505: 0.8703% ( 15) 00:30:27.007 1623.505 - 1630.953: 0.8946% ( 11) 00:30:27.007 1630.953 - 1638.400: 0.9100% ( 7) 00:30:27.007 1638.400 - 1645.847: 0.9896% ( 36) 00:30:27.007 1645.847 - 1653.295: 0.9984% ( 4) 00:30:27.007 1653.295 - 1660.742: 1.0359% ( 17) 00:30:27.007 1660.742 - 1668.189: 1.0426% ( 3) 00:30:27.007 1668.189 - 1675.636: 1.0470% ( 2) 00:30:27.007 1675.636 - 1683.084: 1.0514% ( 2) 00:30:27.007 1683.084 - 1690.531: 1.0580% ( 3) 00:30:27.007 1690.531 - 1697.978: 1.0647% ( 3) 00:30:27.007 1697.978 - 1705.425: 1.0691% ( 2) 00:30:27.007 1705.425 - 1712.873: 1.0757% ( 3) 00:30:27.007 1712.873 - 1720.320: 1.0823% ( 3) 00:30:27.007 1720.320 - 1727.767: 1.0978% ( 7) 00:30:27.007 1727.767 - 1735.215: 1.1155% ( 8) 00:30:27.007 1735.215 - 1742.662: 1.1442% ( 13) 00:30:27.007 1742.662 - 1750.109: 1.1751% ( 14) 00:30:27.007 1750.109 - 1757.556: 1.1950% ( 9) 00:30:27.007 1757.556 - 1765.004: 1.2126% ( 8) 00:30:27.007 1765.004 - 1772.451: 1.2281% ( 7) 00:30:27.007 1772.451 - 1779.898: 1.2480% ( 9) 00:30:27.007 1779.898 - 1787.345: 1.2634% ( 7) 00:30:27.007 1787.345 - 1794.793: 1.2745% ( 5) 00:30:27.007 1794.793 - 1802.240: 1.2900% ( 7) 00:30:27.007 1802.240 - 1809.687: 1.3120% ( 10) 00:30:27.007 1809.687 - 1817.135: 1.3319% ( 9) 00:30:27.007 1817.135 - 1824.582: 1.3606% ( 13) 00:30:27.007 1824.582 - 1832.029: 1.3783% ( 8) 00:30:27.007 1832.029 - 1839.476: 1.4114% ( 15) 00:30:27.007 1839.476 - 1846.924: 1.4976% ( 39) 00:30:27.007 1846.924 - 1854.371: 1.5263% ( 13) 00:30:27.007 1854.371 - 1861.818: 1.5550% ( 13) 00:30:27.008 1861.818 - 1869.265: 1.6058% ( 23) 00:30:27.008 1869.265 - 1876.713: 1.6301% ( 11) 00:30:27.008 1876.713 - 1884.160: 1.6986% ( 31) 00:30:27.008 1884.160 - 1891.607: 1.7406% ( 19) 00:30:27.008 1891.607 - 1899.055: 1.7671% ( 12) 00:30:27.008 1899.055 - 1906.502: 1.8090% ( 19) 00:30:27.008 1906.502 - 1921.396: 1.8731% ( 29) 00:30:27.008 1921.396 - 1936.291: 2.0277% ( 70) 00:30:27.008 1936.291 - 1951.185: 2.1426% ( 52) 00:30:27.008 1951.185 - 1966.080: 2.2331% ( 41) 00:30:27.008 1966.080 - 1980.975: 2.3325% ( 45) 00:30:27.008 1980.975 - 1995.869: 2.4717% ( 63) 00:30:27.008 1995.869 - 2010.764: 2.6175% ( 66) 00:30:27.008 2010.764 - 2025.658: 2.8118% ( 88) 00:30:27.008 2025.658 - 2040.553: 3.0901% ( 126) 00:30:27.008 2040.553 - 2055.447: 3.3751% ( 129) 00:30:27.008 2055.447 - 2070.342: 3.5982% ( 101) 00:30:27.008 2070.342 - 2085.236: 3.8323% ( 106) 00:30:27.008 2085.236 - 2100.131: 4.1592% ( 148) 00:30:27.008 2100.131 - 2115.025: 4.5656% ( 184) 00:30:27.008 2115.025 - 2129.920: 4.9721% ( 184) 00:30:27.008 2129.920 - 2144.815: 5.4249% ( 205) 00:30:27.008 2144.815 - 2159.709: 5.9196% ( 224) 00:30:27.008 2159.709 - 2174.604: 6.5447% ( 283) 00:30:27.008 2174.604 - 2189.498: 7.2273% ( 309) 00:30:27.008 2189.498 - 2204.393: 7.9297% ( 318) 00:30:27.008 2204.393 - 2219.287: 8.7469% ( 370) 00:30:27.008 2219.287 - 2234.182: 9.6062% ( 389) 00:30:27.008 2234.182 - 2249.076: 10.5758% ( 439) 00:30:27.008 2249.076 - 2263.971: 11.5389% ( 436) 00:30:27.008 2263.971 - 2278.865: 12.4887% ( 430) 
00:30:27.008 2278.865 - 2293.760: 13.4451% ( 433) 00:30:27.008 2293.760 - 2308.655: 14.6754% ( 557) 00:30:27.008 2308.655 - 2323.549: 15.7953% ( 507) 00:30:27.008 2323.549 - 2338.444: 16.9019% ( 501) 00:30:27.008 2338.444 - 2353.338: 18.0593% ( 524) 00:30:27.008 2353.338 - 2368.233: 19.2631% ( 545) 00:30:27.008 2368.233 - 2383.127: 20.5553% ( 585) 00:30:27.008 2383.127 - 2398.022: 21.7989% ( 563) 00:30:27.008 2398.022 - 2412.916: 22.9960% ( 542) 00:30:27.008 2412.916 - 2427.811: 24.2330% ( 560) 00:30:27.008 2427.811 - 2442.705: 25.5804% ( 610) 00:30:27.008 2442.705 - 2457.600: 26.8328% ( 567) 00:30:27.008 2457.600 - 2472.495: 28.1691% ( 605) 00:30:27.008 2472.495 - 2487.389: 29.4171% ( 565) 00:30:27.008 2487.389 - 2502.284: 30.7998% ( 626) 00:30:27.008 2502.284 - 2517.178: 32.1074% ( 592) 00:30:27.008 2517.178 - 2532.073: 33.4460% ( 606) 00:30:27.008 2532.073 - 2546.967: 34.6918% ( 564) 00:30:27.008 2546.967 - 2561.862: 36.0104% ( 597) 00:30:27.008 2561.862 - 2576.756: 37.2164% ( 546) 00:30:27.008 2576.756 - 2591.651: 38.4313% ( 550) 00:30:27.008 2591.651 - 2606.545: 39.7080% ( 578) 00:30:27.008 2606.545 - 2621.440: 40.9184% ( 548) 00:30:27.008 2621.440 - 2636.335: 42.0538% ( 514) 00:30:27.008 2636.335 - 2651.229: 43.2266% ( 531) 00:30:27.008 2651.229 - 2666.124: 44.3752% ( 520) 00:30:27.008 2666.124 - 2681.018: 45.5570% ( 535) 00:30:27.008 2681.018 - 2695.913: 46.7078% ( 521) 00:30:27.008 2695.913 - 2710.807: 47.9182% ( 548) 00:30:27.008 2710.807 - 2725.702: 48.9784% ( 480) 00:30:27.008 2725.702 - 2740.596: 50.1425% ( 527) 00:30:27.008 2740.596 - 2755.491: 51.2933% ( 521) 00:30:27.008 2755.491 - 2770.385: 52.2828% ( 448) 00:30:27.008 2770.385 - 2785.280: 53.2481% ( 437) 00:30:27.008 2785.280 - 2800.175: 54.1603% ( 413) 00:30:27.008 2800.175 - 2815.069: 55.0681% ( 411) 00:30:27.008 2815.069 - 2829.964: 55.8744% ( 365) 00:30:27.008 2829.964 - 2844.858: 56.7380% ( 391) 00:30:27.008 2844.858 - 2859.753: 57.5531% ( 369) 00:30:27.008 2859.753 - 2874.647: 58.4101% ( 388) 00:30:27.008 2874.647 - 2889.542: 59.2119% ( 363) 00:30:27.008 2889.542 - 2904.436: 59.9629% ( 340) 00:30:27.008 2904.436 - 2919.331: 60.7139% ( 340) 00:30:27.008 2919.331 - 2934.225: 61.4914% ( 352) 00:30:27.008 2934.225 - 2949.120: 62.2159% ( 328) 00:30:27.008 2949.120 - 2964.015: 62.9271% ( 322) 00:30:27.008 2964.015 - 2978.909: 63.6936% ( 347) 00:30:27.008 2978.909 - 2993.804: 64.4004% ( 320) 00:30:27.008 2993.804 - 3008.698: 65.0962% ( 315) 00:30:27.008 3008.698 - 3023.593: 65.7721% ( 306) 00:30:27.008 3023.593 - 3038.487: 66.5054% ( 332) 00:30:27.008 3038.487 - 3053.382: 67.1835% ( 307) 00:30:27.008 3053.382 - 3068.276: 67.8616% ( 307) 00:30:27.008 3068.276 - 3083.171: 68.5640% ( 318) 00:30:27.008 3083.171 - 3098.065: 69.2510% ( 311) 00:30:27.008 3098.065 - 3112.960: 69.9423% ( 313) 00:30:27.008 3112.960 - 3127.855: 70.6448% ( 318) 00:30:27.008 3127.855 - 3142.749: 71.3184% ( 305) 00:30:27.008 3142.749 - 3157.644: 71.9921% ( 305) 00:30:27.008 3157.644 - 3172.538: 72.6592% ( 302) 00:30:27.008 3172.538 - 3187.433: 73.3881% ( 330) 00:30:27.008 3187.433 - 3202.327: 74.0640% ( 306) 00:30:27.008 3202.327 - 3217.222: 74.7620% ( 316) 00:30:27.008 3217.222 - 3232.116: 75.4224% ( 299) 00:30:27.008 3232.116 - 3247.011: 76.1293% ( 320) 00:30:27.008 3247.011 - 3261.905: 76.7919% ( 300) 00:30:27.008 3261.905 - 3276.800: 77.4855% ( 314) 00:30:27.008 3276.800 - 3291.695: 78.1371% ( 295) 00:30:27.008 3291.695 - 3306.589: 78.8439% ( 320) 00:30:27.008 3306.589 - 3321.484: 79.5110% ( 302) 00:30:27.008 3321.484 - 3336.378: 80.1692% ( 298) 
00:30:27.008 3336.378 - 3351.273: 80.8407% ( 304) 00:30:27.008 3351.273 - 3366.167: 81.4989% ( 298) 00:30:27.008 3366.167 - 3381.062: 82.1638% ( 301) 00:30:27.008 3381.062 - 3395.956: 82.8087% ( 292) 00:30:27.008 3395.956 - 3410.851: 83.4515% ( 291) 00:30:27.008 3410.851 - 3425.745: 84.0810% ( 285) 00:30:27.008 3425.745 - 3440.640: 84.7150% ( 287) 00:30:27.008 3440.640 - 3455.535: 85.3533% ( 289) 00:30:27.008 3455.535 - 3470.429: 85.9917% ( 289) 00:30:27.008 3470.429 - 3485.324: 86.5947% ( 273) 00:30:27.008 3485.324 - 3500.218: 87.2242% ( 285) 00:30:27.008 3500.218 - 3515.113: 87.8338% ( 276) 00:30:27.008 3515.113 - 3530.007: 88.4280% ( 269) 00:30:27.008 3530.007 - 3544.902: 88.9979% ( 258) 00:30:27.008 3544.902 - 3559.796: 89.5810% ( 264) 00:30:27.008 3559.796 - 3574.691: 90.1398% ( 253) 00:30:27.008 3574.691 - 3589.585: 90.6721% ( 241) 00:30:27.008 3589.585 - 3604.480: 91.2133% ( 245) 00:30:27.008 3604.480 - 3619.375: 91.7258% ( 232) 00:30:27.008 3619.375 - 3634.269: 92.2537% ( 239) 00:30:27.008 3634.269 - 3649.164: 92.7529% ( 226) 00:30:27.008 3649.164 - 3664.058: 93.2344% ( 218) 00:30:27.008 3664.058 - 3678.953: 93.6651% ( 195) 00:30:27.008 3678.953 - 3693.847: 94.1135% ( 203) 00:30:27.008 3693.847 - 3708.742: 94.5287% ( 188) 00:30:27.008 3708.742 - 3723.636: 94.9330% ( 183) 00:30:27.008 3723.636 - 3738.531: 95.2952% ( 164) 00:30:27.008 3738.531 - 3753.425: 95.6442% ( 158) 00:30:27.008 3753.425 - 3768.320: 95.9910% ( 157) 00:30:27.008 3768.320 - 3783.215: 96.2980% ( 139) 00:30:27.008 3783.215 - 3798.109: 96.5785% ( 127) 00:30:27.008 3798.109 - 3813.004: 96.8436% ( 120) 00:30:27.008 3813.004 - 3842.793: 97.2876% ( 201) 00:30:27.008 3842.793 - 3872.582: 97.6631% ( 170) 00:30:27.008 3872.582 - 3902.371: 97.9988% ( 152) 00:30:27.008 3902.371 - 3932.160: 98.2904% ( 132) 00:30:27.008 3932.160 - 3961.949: 98.5400% ( 113) 00:30:27.008 3961.949 - 3991.738: 98.7542% ( 97) 00:30:27.008 3991.738 - 4021.527: 98.9552% ( 91) 00:30:27.008 4021.527 - 4051.316: 99.1231% ( 76) 00:30:27.008 4051.316 - 4081.105: 99.2291% ( 48) 00:30:27.008 4081.105 - 4110.895: 99.3108% ( 37) 00:30:27.008 4110.895 - 4140.684: 99.3859% ( 34) 00:30:27.008 4140.684 - 4170.473: 99.4544% ( 31) 00:30:27.008 4170.473 - 4200.262: 99.5119% ( 26) 00:30:27.008 4200.262 - 4230.051: 99.5582% ( 21) 00:30:27.008 4230.051 - 4259.840: 99.5892% ( 14) 00:30:27.008 4259.840 - 4289.629: 99.6267% ( 17) 00:30:27.008 4289.629 - 4319.418: 99.6444% ( 8) 00:30:27.008 4319.418 - 4349.207: 99.6621% ( 8) 00:30:27.008 4349.207 - 4378.996: 99.6753% ( 6) 00:30:27.008 4378.996 - 4408.785: 99.6819% ( 3) 00:30:27.008 4408.785 - 4438.575: 99.6952% ( 6) 00:30:27.008 4438.575 - 4468.364: 99.7084% ( 6) 00:30:27.008 4468.364 - 4498.153: 99.7195% ( 5) 00:30:27.008 4498.153 - 4527.942: 99.7283% ( 4) 00:30:27.008 4527.942 - 4557.731: 99.7394% ( 5) 00:30:27.008 4557.731 - 4587.520: 99.7460% ( 3) 00:30:27.008 4587.520 - 4617.309: 99.7548% ( 4) 00:30:27.008 4617.309 - 4647.098: 99.7681% ( 6) 00:30:27.008 4647.098 - 4676.887: 99.7791% ( 5) 00:30:27.008 4676.887 - 4706.676: 99.7857% ( 3) 00:30:27.008 4706.676 - 4736.465: 99.7902% ( 2) 00:30:27.008 4736.465 - 4766.255: 99.7946% ( 2) 00:30:27.008 4766.255 - 4796.044: 99.8012% ( 3) 00:30:27.008 4796.044 - 4825.833: 99.8056% ( 2) 00:30:27.008 4825.833 - 4855.622: 99.8167% ( 5) 00:30:27.008 4855.622 - 4885.411: 99.8233% ( 3) 00:30:27.008 4885.411 - 4915.200: 99.8277% ( 2) 00:30:27.008 4915.200 - 4944.989: 99.8343% ( 3) 00:30:27.008 4944.989 - 4974.778: 99.8388% ( 2) 00:30:27.008 4974.778 - 5004.567: 99.8410% ( 1) 00:30:27.008 
5004.567 - 5034.356: 99.8454% ( 2) 00:30:27.008 5034.356 - 5064.145: 99.8476% ( 1) 00:30:27.008 5064.145 - 5093.935: 99.8498% ( 1) 00:30:27.008 5093.935 - 5123.724: 99.8520% ( 1) 00:30:27.008 5123.724 - 5153.513: 99.8542% ( 1) 00:30:27.008 5153.513 - 5183.302: 99.8564% ( 1) 00:30:27.008 5183.302 - 5213.091: 99.8586% ( 1) 00:30:27.008 5213.091 - 5242.880: 99.8608% ( 1) 00:30:27.008 5242.880 - 5272.669: 99.8653% ( 2) 00:30:27.008 5272.669 - 5302.458: 99.8675% ( 1) 00:30:27.008 5302.458 - 5332.247: 99.8697% ( 1) 00:30:27.008 5332.247 - 5362.036: 99.8719% ( 1) 00:30:27.008 5362.036 - 5391.825: 99.8741% ( 1) 00:30:27.008 5391.825 - 5421.615: 99.8785% ( 2) 00:30:27.008 5451.404 - 5481.193: 99.8829% ( 2) 00:30:27.008 5481.193 - 5510.982: 99.8851% ( 1) 00:30:27.008 5510.982 - 5540.771: 99.8874% ( 1) 00:30:27.009 5540.771 - 5570.560: 99.8896% ( 1) 00:30:27.009 5570.560 - 5600.349: 99.8918% ( 1) 00:30:27.009 5600.349 - 5630.138: 99.8940% ( 1) 00:30:27.009 5630.138 - 5659.927: 99.8962% ( 1) 00:30:27.009 5659.927 - 5689.716: 99.8984% ( 1) 00:30:27.009 5689.716 - 5719.505: 99.9006% ( 1) 00:30:27.009 5719.505 - 5749.295: 99.9028% ( 1) 00:30:27.009 5749.295 - 5779.084: 99.9072% ( 2) 00:30:27.009 5808.873 - 5838.662: 99.9094% ( 1) 00:30:27.009 5838.662 - 5868.451: 99.9139% ( 2) 00:30:27.009 5868.451 - 5898.240: 99.9161% ( 1) 00:30:27.009 5898.240 - 5928.029: 99.9183% ( 1) 00:30:27.009 5928.029 - 5957.818: 99.9205% ( 1) 00:30:27.009 5957.818 - 5987.607: 99.9249% ( 2) 00:30:27.009 5987.607 - 6017.396: 99.9271% ( 1) 00:30:27.009 6017.396 - 6047.185: 99.9293% ( 1) 00:30:27.009 6047.185 - 6076.975: 99.9315% ( 1) 00:30:27.009 6076.975 - 6106.764: 99.9337% ( 1) 00:30:27.009 6106.764 - 6136.553: 99.9359% ( 1) 00:30:27.009 6136.553 - 6166.342: 99.9404% ( 2) 00:30:27.009 6166.342 - 6196.131: 99.9426% ( 1) 00:30:27.009 6196.131 - 6225.920: 99.9448% ( 1) 00:30:27.009 6225.920 - 6255.709: 99.9470% ( 1) 00:30:27.009 6285.498 - 6315.287: 99.9492% ( 1) 00:30:27.009 9472.931 - 9532.509: 99.9514% ( 1) 00:30:27.009 9532.509 - 9592.087: 99.9536% ( 1) 00:30:27.009 9592.087 - 9651.665: 99.9558% ( 1) 00:30:27.009 9651.665 - 9711.244: 99.9625% ( 3) 00:30:27.009 9711.244 - 9770.822: 99.9713% ( 4) 00:30:27.009 9949.556 - 10009.135: 99.9823% ( 5) 00:30:27.009 10187.869 - 10247.447: 99.9845% ( 1) 00:30:27.009 10604.916 - 10664.495: 99.9867% ( 1) 00:30:27.009 13345.513 - 13405.091: 99.9912% ( 2) 00:30:27.009 13405.091 - 13464.669: 100.0000% ( 4) 00:30:27.009 00:30:27.009 07:31:00 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:30:27.009 00:30:27.009 real 0m2.606s 00:30:27.009 user 0m2.227s 00:30:27.009 sys 0m0.222s 00:30:27.009 07:31:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:27.009 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:30:27.009 ************************************ 00:30:27.009 END TEST nvme_perf 00:30:27.009 ************************************ 00:30:27.009 07:31:00 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:27.009 07:31:00 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:30:27.009 07:31:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:27.009 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:30:27.009 ************************************ 00:30:27.009 START TEST nvme_hello_world 00:30:27.009 ************************************ 00:30:27.009 07:31:00 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:30:27.268 Initializing NVMe Controllers 
00:30:27.268 Attached to 0000:00:06.0 00:30:27.268 Namespace ID: 1 size: 5GB 00:30:27.268 Initialization complete. 00:30:27.268 INFO: using host memory buffer for IO 00:30:27.268 Hello world! 00:30:27.268 00:30:27.268 real 0m0.360s 00:30:27.268 user 0m0.149s 00:30:27.268 sys 0m0.119s 00:30:27.268 07:31:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:27.268 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:30:27.268 ************************************ 00:30:27.268 END TEST nvme_hello_world 00:30:27.268 ************************************ 00:30:27.268 07:31:00 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:27.268 07:31:00 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:27.268 07:31:00 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:27.268 07:31:00 -- common/autotest_common.sh@10 -- # set +x 00:30:27.268 ************************************ 00:30:27.268 START TEST nvme_sgl 00:30:27.268 ************************************ 00:30:27.268 07:31:00 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:30:27.525 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:30:27.525 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:30:27.525 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:30:27.525 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:30:27.525 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:30:27.525 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:30:27.783 NVMe Readv/Writev Request test 00:30:27.783 Attached to 0000:00:06.0 00:30:27.783 0000:00:06.0: build_io_request_2 test passed 00:30:27.783 0000:00:06.0: build_io_request_4 test passed 00:30:27.783 0000:00:06.0: build_io_request_5 test passed 00:30:27.783 0000:00:06.0: build_io_request_6 test passed 00:30:27.783 0000:00:06.0: build_io_request_7 test passed 00:30:27.783 0000:00:06.0: build_io_request_10 test passed 00:30:27.783 Cleaning up... 00:30:27.783 00:30:27.783 real 0m0.484s 00:30:27.783 user 0m0.245s 00:30:27.783 sys 0m0.158s 00:30:27.783 07:31:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:27.783 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:30:27.783 ************************************ 00:30:27.783 END TEST nvme_sgl 00:30:27.783 ************************************ 00:30:27.783 07:31:01 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:27.783 07:31:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:27.783 07:31:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:27.783 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:30:27.783 ************************************ 00:30:27.783 START TEST nvme_e2edp 00:30:27.783 ************************************ 00:30:27.783 07:31:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:30:28.042 NVMe Write/Read with End-to-End data protection test 00:30:28.042 Attached to 0000:00:06.0 00:30:28.042 Cleaning up... 
00:30:28.042 00:30:28.042 real 0m0.346s 00:30:28.042 user 0m0.121s 00:30:28.042 sys 0m0.142s 00:30:28.042 07:31:01 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:28.042 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:30:28.042 ************************************ 00:30:28.042 END TEST nvme_e2edp 00:30:28.042 ************************************ 00:30:28.042 07:31:01 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:28.042 07:31:01 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:28.042 07:31:01 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:28.042 07:31:01 -- common/autotest_common.sh@10 -- # set +x 00:30:28.042 ************************************ 00:30:28.042 START TEST nvme_reserve 00:30:28.042 ************************************ 00:30:28.042 07:31:01 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:30:28.608 ===================================================== 00:30:28.608 NVMe Controller at PCI bus 0, device 6, function 0 00:30:28.608 ===================================================== 00:30:28.608 Reservations: Not Supported 00:30:28.608 Reservation test passed 00:30:28.608 ************************************ 00:30:28.608 END TEST nvme_reserve 00:30:28.608 ************************************ 00:30:28.608 00:30:28.608 real 0m0.324s 00:30:28.608 user 0m0.112s 00:30:28.608 sys 0m0.145s 00:30:28.608 07:31:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:28.608 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:30:28.608 07:31:02 -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:28.608 07:31:02 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:28.608 07:31:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:28.608 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:30:28.608 ************************************ 00:30:28.608 START TEST nvme_err_injection 00:30:28.608 ************************************ 00:30:28.608 07:31:02 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:30:28.867 NVMe Error Injection test 00:30:28.867 Attached to 0000:00:06.0 00:30:28.867 0000:00:06.0: get features failed as expected 00:30:28.867 0000:00:06.0: get features successfully as expected 00:30:28.867 0000:00:06.0: read failed as expected 00:30:28.867 0000:00:06.0: read successfully as expected 00:30:28.867 Cleaning up... 
00:30:28.867 00:30:28.867 real 0m0.370s 00:30:28.867 user 0m0.148s 00:30:28.867 sys 0m0.141s 00:30:28.867 07:31:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:28.867 ************************************ 00:30:28.867 END TEST nvme_err_injection 00:30:28.867 ************************************ 00:30:28.867 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:30:28.867 07:31:02 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:28.867 07:31:02 -- common/autotest_common.sh@1075 -- # '[' 9 -le 1 ']' 00:30:28.867 07:31:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:28.867 07:31:02 -- common/autotest_common.sh@10 -- # set +x 00:30:28.867 ************************************ 00:30:28.867 START TEST nvme_overhead 00:30:28.867 ************************************ 00:30:28.867 07:31:02 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:30:30.246 Initializing NVMe Controllers 00:30:30.246 Attached to 0000:00:06.0 00:30:30.246 Initialization complete. Launching workers. 00:30:30.246 submit (in ns) avg, min, max = 17401.4, 12742.7, 87330.9 00:30:30.246 complete (in ns) avg, min, max = 12963.3, 8424.5, 82431.8 00:30:30.246 00:30:30.246 Submit histogram 00:30:30.246 ================ 00:30:30.246 Range in us Cumulative Count 00:30:30.246 12.742 - 12.800: 0.0633% ( 5) 00:30:30.246 12.800 - 12.858: 0.2151% ( 12) 00:30:30.246 12.858 - 12.916: 0.4302% ( 17) 00:30:30.246 12.916 - 12.975: 0.7338% ( 24) 00:30:30.246 12.975 - 13.033: 1.1007% ( 29) 00:30:30.246 13.033 - 13.091: 1.4550% ( 28) 00:30:30.246 13.091 - 13.149: 1.9357% ( 38) 00:30:30.246 13.149 - 13.207: 2.6695% ( 58) 00:30:30.246 13.207 - 13.265: 3.2262% ( 44) 00:30:30.246 13.265 - 13.324: 3.8841% ( 52) 00:30:30.246 13.324 - 13.382: 4.8330% ( 75) 00:30:30.246 13.382 - 13.440: 5.5541% ( 57) 00:30:30.246 13.440 - 13.498: 6.4398% ( 70) 00:30:30.246 13.498 - 13.556: 7.0597% ( 49) 00:30:30.246 13.556 - 13.615: 7.9074% ( 67) 00:30:30.246 13.615 - 13.673: 8.7424% ( 66) 00:30:30.246 13.673 - 13.731: 9.5521% ( 64) 00:30:30.246 13.731 - 13.789: 10.2480% ( 55) 00:30:30.246 13.789 - 13.847: 10.8932% ( 51) 00:30:30.246 13.847 - 13.905: 11.3487% ( 36) 00:30:30.246 13.905 - 13.964: 11.8548% ( 40) 00:30:30.246 13.964 - 14.022: 12.2976% ( 35) 00:30:30.246 14.022 - 14.080: 12.7783% ( 38) 00:30:30.246 14.080 - 14.138: 13.2465% ( 37) 00:30:30.246 14.138 - 14.196: 13.6134% ( 29) 00:30:30.246 14.196 - 14.255: 14.0056% ( 31) 00:30:30.246 14.255 - 14.313: 14.3472% ( 27) 00:30:30.246 14.313 - 14.371: 14.6635% ( 25) 00:30:30.246 14.371 - 14.429: 15.0810% ( 33) 00:30:30.246 14.429 - 14.487: 16.2449% ( 92) 00:30:30.246 14.487 - 14.545: 17.8391% ( 126) 00:30:30.246 14.545 - 14.604: 20.5972% ( 218) 00:30:30.246 14.604 - 14.662: 24.7849% ( 331) 00:30:30.246 14.662 - 14.720: 28.9727% ( 331) 00:30:30.246 14.720 - 14.778: 32.6037% ( 287) 00:30:30.246 14.778 - 14.836: 35.8806% ( 259) 00:30:30.246 14.836 - 14.895: 38.0567% ( 172) 00:30:30.246 14.895 - 15.011: 40.5491% ( 197) 00:30:30.246 15.011 - 15.127: 41.6498% ( 87) 00:30:30.246 15.127 - 15.244: 42.1812% ( 42) 00:30:30.246 15.244 - 15.360: 42.5987% ( 33) 00:30:30.246 15.360 - 15.476: 43.4464% ( 67) 00:30:30.246 15.476 - 15.593: 45.3441% ( 150) 00:30:30.246 15.593 - 15.709: 48.4438% ( 245) 00:30:30.246 15.709 - 15.825: 51.7966% ( 265) 00:30:30.246 15.825 - 15.942: 54.8077% ( 238) 00:30:30.246 15.942 - 16.058: 57.2621% ( 194) 00:30:30.246 16.058 - 16.175: 
58.9828% ( 136) 00:30:30.246 16.175 - 16.291: 60.6908% ( 135) 00:30:30.246 16.291 - 16.407: 62.0445% ( 107) 00:30:30.246 16.407 - 16.524: 62.9302% ( 70) 00:30:30.246 16.524 - 16.640: 63.5248% ( 47) 00:30:30.246 16.640 - 16.756: 63.9170% ( 31) 00:30:30.246 16.756 - 16.873: 64.3092% ( 31) 00:30:30.246 16.873 - 16.989: 64.5622% ( 20) 00:30:30.246 16.989 - 17.105: 64.7520% ( 15) 00:30:30.246 17.105 - 17.222: 64.9038% ( 12) 00:30:30.246 17.222 - 17.338: 65.1316% ( 18) 00:30:30.246 17.338 - 17.455: 65.2454% ( 9) 00:30:30.246 17.455 - 17.571: 65.5238% ( 22) 00:30:30.246 17.571 - 17.687: 66.9155% ( 110) 00:30:30.246 17.687 - 17.804: 69.9393% ( 239) 00:30:30.246 17.804 - 17.920: 73.0010% ( 242) 00:30:30.246 17.920 - 18.036: 75.3289% ( 184) 00:30:30.246 18.036 - 18.153: 76.6068% ( 101) 00:30:30.246 18.153 - 18.269: 77.2520% ( 51) 00:30:30.246 18.269 - 18.385: 77.6189% ( 29) 00:30:30.246 18.385 - 18.502: 77.9732% ( 28) 00:30:30.246 18.502 - 18.618: 78.4160% ( 35) 00:30:30.246 18.618 - 18.735: 78.7323% ( 25) 00:30:30.246 18.735 - 18.851: 79.4534% ( 57) 00:30:30.246 18.851 - 18.967: 80.3264% ( 69) 00:30:30.246 18.967 - 19.084: 81.4777% ( 91) 00:30:30.246 19.084 - 19.200: 82.5658% ( 86) 00:30:30.246 19.200 - 19.316: 83.2996% ( 58) 00:30:30.246 19.316 - 19.433: 83.8942% ( 47) 00:30:30.246 19.433 - 19.549: 84.4762% ( 46) 00:30:30.246 19.549 - 19.665: 84.8811% ( 32) 00:30:30.246 19.665 - 19.782: 85.1847% ( 24) 00:30:30.246 19.782 - 19.898: 85.5643% ( 30) 00:30:30.246 19.898 - 20.015: 85.9059% ( 27) 00:30:30.246 20.015 - 20.131: 86.0703% ( 13) 00:30:30.247 20.131 - 20.247: 86.3993% ( 26) 00:30:30.247 20.247 - 20.364: 86.6017% ( 16) 00:30:30.247 20.364 - 20.480: 86.9180% ( 25) 00:30:30.247 20.480 - 20.596: 87.0698% ( 12) 00:30:30.247 20.596 - 20.713: 87.2849% ( 17) 00:30:30.247 20.713 - 20.829: 87.4873% ( 16) 00:30:30.247 20.829 - 20.945: 87.7024% ( 17) 00:30:30.247 20.945 - 21.062: 87.8289% ( 10) 00:30:30.247 21.062 - 21.178: 88.0061% ( 14) 00:30:30.247 21.178 - 21.295: 88.1326% ( 10) 00:30:30.247 21.295 - 21.411: 88.2338% ( 8) 00:30:30.247 21.411 - 21.527: 88.3350% ( 8) 00:30:30.247 21.527 - 21.644: 88.4615% ( 10) 00:30:30.247 21.644 - 21.760: 88.6387% ( 14) 00:30:30.247 21.760 - 21.876: 88.9423% ( 24) 00:30:30.247 21.876 - 21.993: 89.1574% ( 17) 00:30:30.247 21.993 - 22.109: 89.3092% ( 12) 00:30:30.247 22.109 - 22.225: 89.4484% ( 11) 00:30:30.247 22.225 - 22.342: 89.5369% ( 7) 00:30:30.247 22.342 - 22.458: 89.7014% ( 13) 00:30:30.247 22.458 - 22.575: 89.8532% ( 12) 00:30:30.247 22.575 - 22.691: 90.0430% ( 15) 00:30:30.247 22.691 - 22.807: 90.1569% ( 9) 00:30:30.247 22.807 - 22.924: 90.3467% ( 15) 00:30:30.247 22.924 - 23.040: 90.4732% ( 10) 00:30:30.247 23.040 - 23.156: 90.5997% ( 10) 00:30:30.247 23.156 - 23.273: 90.7515% ( 12) 00:30:30.247 23.273 - 23.389: 91.0046% ( 20) 00:30:30.247 23.389 - 23.505: 91.1437% ( 11) 00:30:30.247 23.505 - 23.622: 91.1817% ( 3) 00:30:30.247 23.622 - 23.738: 91.2829% ( 8) 00:30:30.247 23.738 - 23.855: 91.4347% ( 12) 00:30:30.247 23.855 - 23.971: 91.5486% ( 9) 00:30:30.247 23.971 - 24.087: 91.6751% ( 10) 00:30:30.247 24.087 - 24.204: 91.8396% ( 13) 00:30:30.247 24.204 - 24.320: 92.0040% ( 13) 00:30:30.247 24.320 - 24.436: 92.1179% ( 9) 00:30:30.247 24.436 - 24.553: 92.2444% ( 10) 00:30:30.247 24.553 - 24.669: 92.3203% ( 6) 00:30:30.247 24.669 - 24.785: 92.4216% ( 8) 00:30:30.247 24.785 - 24.902: 92.5354% ( 9) 00:30:30.247 24.902 - 25.018: 92.6493% ( 9) 00:30:30.247 25.018 - 25.135: 92.6619% ( 1) 00:30:30.247 25.135 - 25.251: 92.7379% ( 6) 00:30:30.247 25.251 - 25.367: 
92.8264% ( 7) 00:30:30.247 25.367 - 25.484: 92.9023% ( 6) 00:30:30.247 25.484 - 25.600: 92.9656% ( 5) 00:30:30.247 25.600 - 25.716: 92.9909% ( 2) 00:30:30.247 25.833 - 25.949: 93.0921% ( 8) 00:30:30.247 25.949 - 26.065: 93.2439% ( 12) 00:30:30.247 26.065 - 26.182: 93.3831% ( 11) 00:30:30.247 26.182 - 26.298: 93.4464% ( 5) 00:30:30.247 26.298 - 26.415: 93.4717% ( 2) 00:30:30.247 26.415 - 26.531: 93.5223% ( 4) 00:30:30.247 26.531 - 26.647: 93.5855% ( 5) 00:30:30.247 26.647 - 26.764: 93.6614% ( 6) 00:30:30.247 26.764 - 26.880: 93.6994% ( 3) 00:30:30.247 26.880 - 26.996: 93.7627% ( 5) 00:30:30.247 26.996 - 27.113: 93.8259% ( 5) 00:30:30.247 27.113 - 27.229: 93.8639% ( 3) 00:30:30.247 27.229 - 27.345: 93.9524% ( 7) 00:30:30.247 27.345 - 27.462: 93.9904% ( 3) 00:30:30.247 27.462 - 27.578: 94.0157% ( 2) 00:30:30.247 27.578 - 27.695: 94.0789% ( 5) 00:30:30.247 27.695 - 27.811: 94.1169% ( 3) 00:30:30.247 27.811 - 27.927: 94.1675% ( 4) 00:30:30.247 27.927 - 28.044: 94.2055% ( 3) 00:30:30.247 28.044 - 28.160: 94.2940% ( 7) 00:30:30.247 28.160 - 28.276: 94.3952% ( 8) 00:30:30.247 28.276 - 28.393: 94.5218% ( 10) 00:30:30.247 28.393 - 28.509: 94.6103% ( 7) 00:30:30.247 28.509 - 28.625: 94.7368% ( 10) 00:30:30.247 28.625 - 28.742: 94.8507% ( 9) 00:30:30.247 28.742 - 28.858: 95.0278% ( 14) 00:30:30.247 28.858 - 28.975: 95.1797% ( 12) 00:30:30.247 28.975 - 29.091: 95.2935% ( 9) 00:30:30.247 29.091 - 29.207: 95.4200% ( 10) 00:30:30.247 29.207 - 29.324: 95.6098% ( 15) 00:30:30.247 29.324 - 29.440: 95.8122% ( 16) 00:30:30.247 29.440 - 29.556: 96.0273% ( 17) 00:30:30.247 29.556 - 29.673: 96.1918% ( 13) 00:30:30.247 29.673 - 29.789: 96.4448% ( 20) 00:30:30.247 29.789 - 30.022: 96.8623% ( 33) 00:30:30.247 30.022 - 30.255: 97.1786% ( 25) 00:30:30.247 30.255 - 30.487: 97.3811% ( 16) 00:30:30.247 30.487 - 30.720: 97.5455% ( 13) 00:30:30.247 30.720 - 30.953: 97.6721% ( 10) 00:30:30.247 30.953 - 31.185: 97.7480% ( 6) 00:30:30.247 31.185 - 31.418: 97.9378% ( 15) 00:30:30.247 31.418 - 31.651: 98.0390% ( 8) 00:30:30.247 31.651 - 31.884: 98.1528% ( 9) 00:30:30.247 31.884 - 32.116: 98.2287% ( 6) 00:30:30.247 32.116 - 32.349: 98.3047% ( 6) 00:30:30.247 32.349 - 32.582: 98.3679% ( 5) 00:30:30.247 32.582 - 32.815: 98.4059% ( 3) 00:30:30.247 32.815 - 33.047: 98.4312% ( 2) 00:30:30.247 33.047 - 33.280: 98.4565% ( 2) 00:30:30.247 33.280 - 33.513: 98.4691% ( 1) 00:30:30.247 33.745 - 33.978: 98.5197% ( 4) 00:30:30.247 33.978 - 34.211: 98.5450% ( 2) 00:30:30.247 34.211 - 34.444: 98.5830% ( 3) 00:30:30.247 34.444 - 34.676: 98.6083% ( 2) 00:30:30.247 34.676 - 34.909: 98.6463% ( 3) 00:30:30.247 34.909 - 35.142: 98.6969% ( 4) 00:30:30.247 35.142 - 35.375: 98.7222% ( 2) 00:30:30.247 35.375 - 35.607: 98.7475% ( 2) 00:30:30.247 35.840 - 36.073: 98.7601% ( 1) 00:30:30.247 36.073 - 36.305: 98.7728% ( 1) 00:30:30.247 36.305 - 36.538: 98.7981% ( 2) 00:30:30.247 36.538 - 36.771: 98.8107% ( 1) 00:30:30.247 36.771 - 37.004: 98.8234% ( 1) 00:30:30.247 37.004 - 37.236: 98.8360% ( 1) 00:30:30.247 37.236 - 37.469: 98.8487% ( 1) 00:30:30.247 37.702 - 37.935: 98.8740% ( 2) 00:30:30.247 37.935 - 38.167: 98.9119% ( 3) 00:30:30.247 38.400 - 38.633: 98.9499% ( 3) 00:30:30.247 38.633 - 38.865: 98.9626% ( 1) 00:30:30.247 38.865 - 39.098: 98.9879% ( 2) 00:30:30.247 39.098 - 39.331: 99.0258% ( 3) 00:30:30.247 39.331 - 39.564: 99.0511% ( 2) 00:30:30.247 39.564 - 39.796: 99.1270% ( 6) 00:30:30.247 39.796 - 40.029: 99.1523% ( 2) 00:30:30.247 40.029 - 40.262: 99.1650% ( 1) 00:30:30.247 40.262 - 40.495: 99.1903% ( 2) 00:30:30.247 40.727 - 40.960: 99.2029% ( 1) 
00:30:30.247 41.658 - 41.891: 99.2156% ( 1) 00:30:30.247 42.356 - 42.589: 99.2409% ( 2) 00:30:30.247 43.753 - 43.985: 99.2535% ( 1) 00:30:30.247 44.218 - 44.451: 99.2662% ( 1) 00:30:30.247 44.451 - 44.684: 99.3041% ( 3) 00:30:30.247 44.684 - 44.916: 99.3548% ( 4) 00:30:30.247 44.916 - 45.149: 99.3674% ( 1) 00:30:30.247 45.149 - 45.382: 99.4180% ( 4) 00:30:30.247 45.382 - 45.615: 99.4433% ( 2) 00:30:30.247 45.615 - 45.847: 99.4686% ( 2) 00:30:30.247 46.080 - 46.313: 99.5066% ( 3) 00:30:30.247 46.313 - 46.545: 99.5825% ( 6) 00:30:30.247 46.778 - 47.011: 99.5951% ( 1) 00:30:30.247 47.244 - 47.476: 99.6078% ( 1) 00:30:30.247 47.709 - 47.942: 99.6204% ( 1) 00:30:30.247 48.873 - 49.105: 99.6331% ( 1) 00:30:30.247 49.338 - 49.571: 99.6457% ( 1) 00:30:30.247 49.571 - 49.804: 99.6711% ( 2) 00:30:30.247 49.804 - 50.036: 99.6837% ( 1) 00:30:30.247 50.269 - 50.502: 99.6964% ( 1) 00:30:30.247 50.502 - 50.735: 99.7090% ( 1) 00:30:30.247 50.735 - 50.967: 99.7217% ( 1) 00:30:30.247 50.967 - 51.200: 99.7343% ( 1) 00:30:30.247 51.433 - 51.665: 99.7470% ( 1) 00:30:30.247 52.364 - 52.596: 99.7596% ( 1) 00:30:30.247 53.295 - 53.527: 99.7849% ( 2) 00:30:30.247 54.225 - 54.458: 99.8102% ( 2) 00:30:30.247 54.458 - 54.691: 99.8229% ( 1) 00:30:30.247 54.691 - 54.924: 99.8355% ( 1) 00:30:30.247 55.389 - 55.622: 99.8482% ( 1) 00:30:30.247 56.320 - 56.553: 99.8608% ( 1) 00:30:30.247 57.018 - 57.251: 99.8735% ( 1) 00:30:30.247 58.880 - 59.113: 99.8861% ( 1) 00:30:30.247 60.975 - 61.440: 99.8988% ( 1) 00:30:30.247 61.440 - 61.905: 99.9114% ( 1) 00:30:30.247 67.025 - 67.491: 99.9241% ( 1) 00:30:30.247 71.680 - 72.145: 99.9367% ( 1) 00:30:30.247 73.076 - 73.542: 99.9494% ( 1) 00:30:30.247 74.473 - 74.938: 99.9620% ( 1) 00:30:30.247 80.058 - 80.524: 99.9747% ( 1) 00:30:30.247 81.455 - 81.920: 99.9873% ( 1) 00:30:30.247 87.040 - 87.505: 100.0000% ( 1) 00:30:30.247 00:30:30.247 Complete histogram 00:30:30.247 ================== 00:30:30.247 Range in us Cumulative Count 00:30:30.247 8.378 - 8.436: 0.0127% ( 1) 00:30:30.247 8.436 - 8.495: 0.0886% ( 6) 00:30:30.247 8.495 - 8.553: 0.3416% ( 20) 00:30:30.247 8.553 - 8.611: 0.5061% ( 13) 00:30:30.247 8.611 - 8.669: 0.7465% ( 19) 00:30:30.247 8.669 - 8.727: 1.1893% ( 35) 00:30:30.247 8.727 - 8.785: 1.6574% ( 37) 00:30:30.247 8.785 - 8.844: 2.1635% ( 40) 00:30:30.247 8.844 - 8.902: 2.9732% ( 64) 00:30:30.248 8.902 - 8.960: 3.6311% ( 52) 00:30:30.248 8.960 - 9.018: 4.1245% ( 39) 00:30:30.248 9.018 - 9.076: 4.5420% ( 33) 00:30:30.248 9.076 - 9.135: 5.2252% ( 54) 00:30:30.248 9.135 - 9.193: 5.7819% ( 44) 00:30:30.248 9.193 - 9.251: 6.2627% ( 38) 00:30:30.248 9.251 - 9.309: 6.5536% ( 23) 00:30:30.248 9.309 - 9.367: 6.8826% ( 26) 00:30:30.248 9.367 - 9.425: 7.2115% ( 26) 00:30:30.248 9.425 - 9.484: 7.5531% ( 27) 00:30:30.248 9.484 - 9.542: 7.8821% ( 26) 00:30:30.248 9.542 - 9.600: 8.0972% ( 17) 00:30:30.248 9.600 - 9.658: 8.6665% ( 45) 00:30:30.248 9.658 - 9.716: 9.7039% ( 82) 00:30:30.248 9.716 - 9.775: 12.5633% ( 226) 00:30:30.248 9.775 - 9.833: 17.4848% ( 389) 00:30:30.248 9.833 - 9.891: 21.9509% ( 353) 00:30:30.248 9.891 - 9.949: 26.8092% ( 384) 00:30:30.248 9.949 - 10.007: 31.0602% ( 336) 00:30:30.248 10.007 - 10.065: 34.2991% ( 256) 00:30:30.248 10.065 - 10.124: 36.3613% ( 163) 00:30:30.248 10.124 - 10.182: 37.9555% ( 126) 00:30:30.248 10.182 - 10.240: 39.1574% ( 95) 00:30:30.248 10.240 - 10.298: 39.9671% ( 64) 00:30:30.248 10.298 - 10.356: 40.4226% ( 36) 00:30:30.248 10.356 - 10.415: 40.7262% ( 24) 00:30:30.248 10.415 - 10.473: 41.2196% ( 39) 00:30:30.248 10.473 - 10.531: 41.6878% 
( 37) 00:30:30.248 10.531 - 10.589: 41.9661% ( 22) 00:30:30.248 10.589 - 10.647: 42.1432% ( 14) 00:30:30.248 10.647 - 10.705: 42.4469% ( 24) 00:30:30.248 10.705 - 10.764: 42.7379% ( 23) 00:30:30.248 10.764 - 10.822: 43.2186% ( 38) 00:30:30.248 10.822 - 10.880: 43.4970% ( 22) 00:30:30.248 10.880 - 10.938: 43.6867% ( 15) 00:30:30.248 10.938 - 10.996: 43.8512% ( 13) 00:30:30.248 10.996 - 11.055: 44.0916% ( 19) 00:30:30.248 11.055 - 11.113: 44.5724% ( 38) 00:30:30.248 11.113 - 11.171: 44.9393% ( 29) 00:30:30.248 11.171 - 11.229: 45.2935% ( 28) 00:30:30.248 11.229 - 11.287: 45.4453% ( 12) 00:30:30.248 11.287 - 11.345: 45.5845% ( 11) 00:30:30.248 11.345 - 11.404: 45.7237% ( 11) 00:30:30.248 11.404 - 11.462: 45.8122% ( 7) 00:30:30.248 11.462 - 11.520: 45.8882% ( 6) 00:30:30.248 11.520 - 11.578: 46.0526% ( 13) 00:30:30.248 11.578 - 11.636: 46.1032% ( 4) 00:30:30.248 11.636 - 11.695: 46.2677% ( 13) 00:30:30.248 11.695 - 11.753: 46.3563% ( 7) 00:30:30.248 11.753 - 11.811: 46.4448% ( 7) 00:30:30.248 11.811 - 11.869: 46.5840% ( 11) 00:30:30.248 11.869 - 11.927: 46.8244% ( 19) 00:30:30.248 11.927 - 11.985: 47.4190% ( 47) 00:30:30.248 11.985 - 12.044: 49.2535% ( 145) 00:30:30.248 12.044 - 12.102: 52.6063% ( 265) 00:30:30.248 12.102 - 12.160: 55.4150% ( 222) 00:30:30.248 12.160 - 12.218: 57.3001% ( 149) 00:30:30.248 12.218 - 12.276: 58.8816% ( 125) 00:30:30.248 12.276 - 12.335: 60.5263% ( 130) 00:30:30.248 12.335 - 12.393: 61.6776% ( 91) 00:30:30.248 12.393 - 12.451: 62.8036% ( 89) 00:30:30.248 12.451 - 12.509: 63.6387% ( 66) 00:30:30.248 12.509 - 12.567: 64.4104% ( 61) 00:30:30.248 12.567 - 12.625: 65.3340% ( 73) 00:30:30.248 12.625 - 12.684: 66.2449% ( 72) 00:30:30.248 12.684 - 12.742: 67.2571% ( 80) 00:30:30.248 12.742 - 12.800: 68.0795% ( 65) 00:30:30.248 12.800 - 12.858: 68.7880% ( 56) 00:30:30.248 12.858 - 12.916: 69.4079% ( 49) 00:30:30.248 12.916 - 12.975: 70.0152% ( 48) 00:30:30.248 12.975 - 13.033: 70.6098% ( 47) 00:30:30.248 13.033 - 13.091: 71.0906% ( 38) 00:30:30.248 13.091 - 13.149: 71.5714% ( 38) 00:30:30.248 13.149 - 13.207: 71.8623% ( 23) 00:30:30.248 13.207 - 13.265: 72.3052% ( 35) 00:30:30.248 13.265 - 13.324: 72.7480% ( 35) 00:30:30.248 13.324 - 13.382: 73.2540% ( 40) 00:30:30.248 13.382 - 13.440: 73.7601% ( 40) 00:30:30.248 13.440 - 13.498: 74.1776% ( 33) 00:30:30.248 13.498 - 13.556: 74.5825% ( 32) 00:30:30.248 13.556 - 13.615: 75.0886% ( 40) 00:30:30.248 13.615 - 13.673: 75.5061% ( 33) 00:30:30.248 13.673 - 13.731: 75.9489% ( 35) 00:30:30.248 13.731 - 13.789: 76.3917% ( 35) 00:30:30.248 13.789 - 13.847: 76.7080% ( 25) 00:30:30.248 13.847 - 13.905: 77.0369% ( 26) 00:30:30.248 13.905 - 13.964: 77.3785% ( 27) 00:30:30.248 13.964 - 14.022: 77.7075% ( 26) 00:30:30.248 14.022 - 14.080: 78.0744% ( 29) 00:30:30.248 14.080 - 14.138: 78.3654% ( 23) 00:30:30.248 14.138 - 14.196: 78.6817% ( 25) 00:30:30.248 14.196 - 14.255: 78.8841% ( 16) 00:30:30.248 14.255 - 14.313: 79.0992% ( 17) 00:30:30.248 14.313 - 14.371: 79.3269% ( 18) 00:30:30.248 14.371 - 14.429: 79.5040% ( 14) 00:30:30.248 14.429 - 14.487: 79.7191% ( 17) 00:30:30.248 14.487 - 14.545: 79.9089% ( 15) 00:30:30.248 14.545 - 14.604: 80.0987% ( 15) 00:30:30.248 14.604 - 14.662: 80.2379% ( 11) 00:30:30.248 14.662 - 14.720: 80.5668% ( 26) 00:30:30.248 14.720 - 14.778: 80.8578% ( 23) 00:30:30.248 14.778 - 14.836: 81.1108% ( 20) 00:30:30.248 14.836 - 14.895: 81.4145% ( 24) 00:30:30.248 14.895 - 15.011: 81.9459% ( 42) 00:30:30.248 15.011 - 15.127: 82.4772% ( 42) 00:30:30.248 15.127 - 15.244: 83.2363% ( 60) 00:30:30.248 15.244 - 15.360: 
83.9828% ( 59) 00:30:30.248 15.360 - 15.476: 84.6533% ( 53) 00:30:30.248 15.476 - 15.593: 85.1974% ( 43) 00:30:30.248 15.593 - 15.709: 85.4884% ( 23) 00:30:30.248 15.709 - 15.825: 85.8047% ( 25) 00:30:30.248 15.825 - 15.942: 86.2222% ( 33) 00:30:30.248 15.942 - 16.058: 86.4499% ( 18) 00:30:30.248 16.058 - 16.175: 86.8421% ( 31) 00:30:30.248 16.175 - 16.291: 87.2849% ( 35) 00:30:30.248 16.291 - 16.407: 87.5886% ( 24) 00:30:30.248 16.407 - 16.524: 88.0314% ( 35) 00:30:30.248 16.524 - 16.640: 88.3350% ( 24) 00:30:30.248 16.640 - 16.756: 88.7146% ( 30) 00:30:30.248 16.756 - 16.873: 88.9550% ( 19) 00:30:30.248 16.873 - 16.989: 89.1447% ( 15) 00:30:30.248 16.989 - 17.105: 89.3725% ( 18) 00:30:30.248 17.105 - 17.222: 89.6129% ( 19) 00:30:30.248 17.222 - 17.338: 89.7900% ( 14) 00:30:30.248 17.338 - 17.455: 90.0051% ( 17) 00:30:30.248 17.455 - 17.571: 90.1948% ( 15) 00:30:30.248 17.571 - 17.687: 90.3340% ( 11) 00:30:30.248 17.687 - 17.804: 90.5111% ( 14) 00:30:30.248 17.804 - 17.920: 90.6883% ( 14) 00:30:30.248 17.920 - 18.036: 90.8021% ( 9) 00:30:30.248 18.036 - 18.153: 90.9160% ( 9) 00:30:30.248 18.153 - 18.269: 91.0552% ( 11) 00:30:30.248 18.269 - 18.385: 91.1184% ( 5) 00:30:30.248 18.385 - 18.502: 91.2323% ( 9) 00:30:30.248 18.502 - 18.618: 91.3968% ( 13) 00:30:30.248 18.618 - 18.735: 91.4980% ( 8) 00:30:30.248 18.735 - 18.851: 91.6371% ( 11) 00:30:30.248 18.851 - 18.967: 91.7510% ( 9) 00:30:30.248 18.967 - 19.084: 91.8649% ( 9) 00:30:30.248 19.084 - 19.200: 91.9914% ( 10) 00:30:30.248 19.200 - 19.316: 92.1559% ( 13) 00:30:30.248 19.316 - 19.433: 92.3077% ( 12) 00:30:30.248 19.433 - 19.549: 92.4469% ( 11) 00:30:30.248 19.549 - 19.665: 92.5607% ( 9) 00:30:30.248 19.665 - 19.782: 92.6746% ( 9) 00:30:30.248 19.782 - 19.898: 92.7505% ( 6) 00:30:30.248 19.898 - 20.015: 92.8517% ( 8) 00:30:30.248 20.015 - 20.131: 92.9276% ( 6) 00:30:30.248 20.131 - 20.247: 92.9656% ( 3) 00:30:30.248 20.247 - 20.364: 93.0162% ( 4) 00:30:30.248 20.364 - 20.480: 93.0921% ( 6) 00:30:30.248 20.480 - 20.596: 93.1301% ( 3) 00:30:30.248 20.596 - 20.713: 93.1933% ( 5) 00:30:30.248 20.713 - 20.829: 93.2439% ( 4) 00:30:30.248 20.829 - 20.945: 93.3451% ( 8) 00:30:30.248 20.945 - 21.062: 93.3957% ( 4) 00:30:30.248 21.062 - 21.178: 93.4337% ( 3) 00:30:30.248 21.178 - 21.295: 93.4464% ( 1) 00:30:30.248 21.295 - 21.411: 93.4843% ( 3) 00:30:30.248 21.411 - 21.527: 93.5729% ( 7) 00:30:30.248 21.527 - 21.644: 93.6108% ( 3) 00:30:30.248 21.644 - 21.760: 93.6994% ( 7) 00:30:30.248 21.760 - 21.876: 93.7247% ( 2) 00:30:30.248 21.876 - 21.993: 93.7627% ( 3) 00:30:30.248 21.993 - 22.109: 93.8133% ( 4) 00:30:30.248 22.109 - 22.225: 93.8386% ( 2) 00:30:30.248 22.225 - 22.342: 93.8765% ( 3) 00:30:30.248 22.342 - 22.458: 93.9018% ( 2) 00:30:30.248 22.458 - 22.575: 94.0283% ( 10) 00:30:30.248 22.575 - 22.691: 94.1043% ( 6) 00:30:30.248 22.691 - 22.807: 94.2055% ( 8) 00:30:30.248 22.807 - 22.924: 94.2687% ( 5) 00:30:30.248 22.924 - 23.040: 94.2940% ( 2) 00:30:30.248 23.040 - 23.156: 94.4079% ( 9) 00:30:30.248 23.156 - 23.273: 94.4712% ( 5) 00:30:30.248 23.273 - 23.389: 94.5091% ( 3) 00:30:30.248 23.389 - 23.505: 94.5344% ( 2) 00:30:30.248 23.505 - 23.622: 94.6483% ( 9) 00:30:30.248 23.622 - 23.738: 94.7368% ( 7) 00:30:30.248 23.738 - 23.855: 94.7621% ( 2) 00:30:30.248 23.855 - 23.971: 94.8128% ( 4) 00:30:30.248 23.971 - 24.087: 94.9140% ( 8) 00:30:30.248 24.087 - 24.204: 94.9772% ( 5) 00:30:30.248 24.204 - 24.320: 95.0405% ( 5) 00:30:30.248 24.320 - 24.436: 95.1290% ( 7) 00:30:30.248 24.436 - 24.553: 95.2176% ( 7) 00:30:30.249 24.553 - 24.669: 
95.3062% ( 7) 00:30:30.249 24.669 - 24.785: 95.4706% ( 13) 00:30:30.249 24.785 - 24.902: 95.5466% ( 6) 00:30:30.249 24.902 - 25.018: 95.6478% ( 8) 00:30:30.249 25.018 - 25.135: 95.8755% ( 18) 00:30:30.249 25.135 - 25.251: 96.0906% ( 17) 00:30:30.249 25.251 - 25.367: 96.3057% ( 17) 00:30:30.249 25.367 - 25.484: 96.5207% ( 17) 00:30:30.249 25.484 - 25.600: 96.6852% ( 13) 00:30:30.249 25.600 - 25.716: 96.7991% ( 9) 00:30:30.249 25.716 - 25.833: 96.9383% ( 11) 00:30:30.249 25.833 - 25.949: 96.9889% ( 4) 00:30:30.249 25.949 - 26.065: 97.0901% ( 8) 00:30:30.249 26.065 - 26.182: 97.1660% ( 6) 00:30:30.249 26.182 - 26.298: 97.2925% ( 10) 00:30:30.249 26.298 - 26.415: 97.3431% ( 4) 00:30:30.249 26.415 - 26.531: 97.4317% ( 7) 00:30:30.249 26.531 - 26.647: 97.5076% ( 6) 00:30:30.249 26.647 - 26.764: 97.6215% ( 9) 00:30:30.249 26.764 - 26.880: 97.7733% ( 12) 00:30:30.249 26.880 - 26.996: 97.8998% ( 10) 00:30:30.249 26.996 - 27.113: 97.9631% ( 5) 00:30:30.249 27.113 - 27.229: 98.0137% ( 4) 00:30:30.249 27.229 - 27.345: 98.0769% ( 5) 00:30:30.249 27.345 - 27.462: 98.1655% ( 7) 00:30:30.249 27.462 - 27.578: 98.2287% ( 5) 00:30:30.249 27.578 - 27.695: 98.2794% ( 4) 00:30:30.249 27.811 - 27.927: 98.3047% ( 2) 00:30:30.249 27.927 - 28.044: 98.3173% ( 1) 00:30:30.249 28.044 - 28.160: 98.3553% ( 3) 00:30:30.249 28.160 - 28.276: 98.3932% ( 3) 00:30:30.249 28.276 - 28.393: 98.4185% ( 2) 00:30:30.249 28.393 - 28.509: 98.4312% ( 1) 00:30:30.249 28.742 - 28.858: 98.4565% ( 2) 00:30:30.249 28.858 - 28.975: 98.4818% ( 2) 00:30:30.249 29.091 - 29.207: 98.4944% ( 1) 00:30:30.249 29.207 - 29.324: 98.5197% ( 2) 00:30:30.249 29.324 - 29.440: 98.5577% ( 3) 00:30:30.249 29.440 - 29.556: 98.5830% ( 2) 00:30:30.249 29.556 - 29.673: 98.5956% ( 1) 00:30:30.249 29.673 - 29.789: 98.6083% ( 1) 00:30:30.249 29.789 - 30.022: 98.6463% ( 3) 00:30:30.249 30.022 - 30.255: 98.6716% ( 2) 00:30:30.249 30.255 - 30.487: 98.6969% ( 2) 00:30:30.249 30.720 - 30.953: 98.7475% ( 4) 00:30:30.249 30.953 - 31.185: 98.7854% ( 3) 00:30:30.249 31.185 - 31.418: 98.7981% ( 1) 00:30:30.249 31.418 - 31.651: 98.8360% ( 3) 00:30:30.249 32.116 - 32.349: 98.8487% ( 1) 00:30:30.249 32.349 - 32.582: 98.8613% ( 1) 00:30:30.249 32.815 - 33.047: 98.8740% ( 1) 00:30:30.249 33.047 - 33.280: 98.8993% ( 2) 00:30:30.249 33.280 - 33.513: 98.9119% ( 1) 00:30:30.249 33.513 - 33.745: 98.9372% ( 2) 00:30:30.249 33.745 - 33.978: 98.9499% ( 1) 00:30:30.249 33.978 - 34.211: 98.9626% ( 1) 00:30:30.249 34.211 - 34.444: 98.9752% ( 1) 00:30:30.249 34.444 - 34.676: 98.9879% ( 1) 00:30:30.249 34.909 - 35.142: 99.0005% ( 1) 00:30:30.249 35.375 - 35.607: 99.0132% ( 1) 00:30:30.249 35.840 - 36.073: 99.0385% ( 2) 00:30:30.249 37.236 - 37.469: 99.0638% ( 2) 00:30:30.249 37.702 - 37.935: 99.0764% ( 1) 00:30:30.249 37.935 - 38.167: 99.0891% ( 1) 00:30:30.249 39.098 - 39.331: 99.1017% ( 1) 00:30:30.249 39.331 - 39.564: 99.1144% ( 1) 00:30:30.249 39.564 - 39.796: 99.1650% ( 4) 00:30:30.249 39.796 - 40.029: 99.2029% ( 3) 00:30:30.249 40.029 - 40.262: 99.2409% ( 3) 00:30:30.249 40.262 - 40.495: 99.2788% ( 3) 00:30:30.249 40.495 - 40.727: 99.3041% ( 2) 00:30:30.249 40.727 - 40.960: 99.3421% ( 3) 00:30:30.249 40.960 - 41.193: 99.3548% ( 1) 00:30:30.249 41.193 - 41.425: 99.3674% ( 1) 00:30:30.249 41.425 - 41.658: 99.4054% ( 3) 00:30:30.249 41.658 - 41.891: 99.4307% ( 2) 00:30:30.249 41.891 - 42.124: 99.4560% ( 2) 00:30:30.249 42.124 - 42.356: 99.5319% ( 6) 00:30:30.249 42.356 - 42.589: 99.5825% ( 4) 00:30:30.249 42.589 - 42.822: 99.5951% ( 1) 00:30:30.249 42.822 - 43.055: 99.6204% ( 2) 
00:30:30.249 43.055 - 43.287: 99.6457% ( 2) 00:30:30.249 43.985 - 44.218: 99.6584% ( 1) 00:30:30.249 44.451 - 44.684: 99.6711% ( 1) 00:30:30.249 44.684 - 44.916: 99.7217% ( 4) 00:30:30.249 44.916 - 45.149: 99.7470% ( 2) 00:30:30.249 45.382 - 45.615: 99.7596% ( 1) 00:30:30.249 45.615 - 45.847: 99.7723% ( 1) 00:30:30.249 46.080 - 46.313: 99.7849% ( 1) 00:30:30.249 46.313 - 46.545: 99.7976% ( 1) 00:30:30.249 46.778 - 47.011: 99.8102% ( 1) 00:30:30.249 47.244 - 47.476: 99.8229% ( 1) 00:30:30.249 47.709 - 47.942: 99.8355% ( 1) 00:30:30.249 47.942 - 48.175: 99.8608% ( 2) 00:30:30.249 48.407 - 48.640: 99.8735% ( 1) 00:30:30.249 49.571 - 49.804: 99.8861% ( 1) 00:30:30.249 50.502 - 50.735: 99.8988% ( 1) 00:30:30.249 51.433 - 51.665: 99.9114% ( 1) 00:30:30.249 52.364 - 52.596: 99.9241% ( 1) 00:30:30.249 56.087 - 56.320: 99.9367% ( 1) 00:30:30.249 64.233 - 64.698: 99.9494% ( 1) 00:30:30.249 67.491 - 67.956: 99.9620% ( 1) 00:30:30.249 68.422 - 68.887: 99.9747% ( 1) 00:30:30.249 74.007 - 74.473: 99.9873% ( 1) 00:30:30.249 82.385 - 82.851: 100.0000% ( 1) 00:30:30.249 00:30:30.249 00:30:30.249 real 0m1.347s 00:30:30.249 user 0m1.156s 00:30:30.249 sys 0m0.108s 00:30:30.249 07:31:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:30.249 07:31:03 -- common/autotest_common.sh@10 -- # set +x 00:30:30.249 ************************************ 00:30:30.249 END TEST nvme_overhead 00:30:30.249 ************************************ 00:30:30.249 07:31:03 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:30.249 07:31:03 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:30:30.249 07:31:03 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:30.249 07:31:03 -- common/autotest_common.sh@10 -- # set +x 00:30:30.249 ************************************ 00:30:30.249 START TEST nvme_arbitration 00:30:30.249 ************************************ 00:30:30.249 07:31:03 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:30:34.472 Initializing NVMe Controllers 00:30:34.472 Attached to 0000:00:06.0 00:30:34.472 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:30:34.472 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:30:34.472 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:30:34.472 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:30:34.472 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:30:34.472 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:30:34.472 Initialization complete. Launching workers. 
00:30:34.472 Starting thread on core 1 with urgent priority queue 00:30:34.472 Starting thread on core 2 with urgent priority queue 00:30:34.472 Starting thread on core 3 with urgent priority queue 00:30:34.472 Starting thread on core 0 with urgent priority queue 00:30:34.472 QEMU NVMe Ctrl (12340 ) core 0: 1514.67 IO/s 66.02 secs/100000 ios 00:30:34.472 QEMU NVMe Ctrl (12340 ) core 1: 1344.00 IO/s 74.40 secs/100000 ios 00:30:34.472 QEMU NVMe Ctrl (12340 ) core 2: 618.67 IO/s 161.64 secs/100000 ios 00:30:34.472 QEMU NVMe Ctrl (12340 ) core 3: 661.33 IO/s 151.21 secs/100000 ios 00:30:34.472 ======================================================== 00:30:34.472 00:30:34.472 00:30:34.472 real 0m3.495s 00:30:34.472 user 0m9.532s 00:30:34.472 sys 0m0.156s 00:30:34.472 07:31:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:34.472 ************************************ 00:30:34.472 END TEST nvme_arbitration 00:30:34.472 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:30:34.472 ************************************ 00:30:34.472 07:31:07 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:30:34.472 07:31:07 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:30:34.472 07:31:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:34.472 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:30:34.472 ************************************ 00:30:34.472 START TEST nvme_single_aen 00:30:34.472 ************************************ 00:30:34.472 07:31:07 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:30:34.472 Asynchronous Event Request test 00:30:34.472 Attached to 0000:00:06.0 00:30:34.472 Reset controller to setup AER completions for this process 00:30:34.472 Registering asynchronous event callbacks... 00:30:34.472 Getting orig temperature thresholds of all controllers 00:30:34.472 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:34.472 Setting all controllers temperature threshold low to trigger AER 00:30:34.472 Waiting for all controllers temperature threshold to be set lower 00:30:34.472 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:34.472 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:30:34.472 Waiting for all controllers to trigger AER and reset threshold 00:30:34.472 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:34.472 Cleaning up... 
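The nvme_single_aen pass that just finished is driven entirely by feature 0x04 (Temperature Threshold): aer reads the original threshold (343 Kelvin), sets it below the current temperature (323 Kelvin) so the controller raises an asynchronous event for log page 2, then restores it in the callback. The SPDK binary talks to the PCI device directly; a rough nvme-cli equivalent of the same trick, assuming a kernel-visible /dev/nvme0 (not the case here while the device is bound to SPDK), would be:

    dev=/dev/nvme0                                # assumed device node
    sudo nvme get-feature "$dev" -f 0x04          # 0x04 = Temperature Threshold, 343 K here
    sudo nvme set-feature "$dev" -f 0x04 -v 200   # force the threshold below the ~323 K reading
    # ...the controller now signals an AER (log page 2, temperature event)...
    sudo nvme set-feature "$dev" -f 0x04 -v 343   # restore the original threshold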
00:30:34.472 00:30:34.472 real 0m0.329s 00:30:34.472 user 0m0.113s 00:30:34.472 sys 0m0.118s 00:30:34.472 07:31:07 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:34.472 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:30:34.472 ************************************ 00:30:34.472 END TEST nvme_single_aen 00:30:34.472 ************************************ 00:30:34.472 07:31:07 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:30:34.472 07:31:07 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:34.472 07:31:07 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:34.472 07:31:07 -- common/autotest_common.sh@10 -- # set +x 00:30:34.472 ************************************ 00:30:34.472 START TEST nvme_doorbell_aers 00:30:34.472 ************************************ 00:30:34.472 07:31:07 -- common/autotest_common.sh@1102 -- # nvme_doorbell_aers 00:30:34.472 07:31:07 -- nvme/nvme.sh@70 -- # bdfs=() 00:30:34.472 07:31:07 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:30:34.472 07:31:07 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:30:34.472 07:31:07 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:30:34.472 07:31:07 -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:34.472 07:31:07 -- common/autotest_common.sh@1496 -- # local bdfs 00:30:34.472 07:31:07 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:34.472 07:31:07 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:34.472 07:31:07 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:34.472 07:31:07 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:30:34.472 07:31:07 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:30:34.472 07:31:07 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:30:34.472 07:31:07 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:30:34.472 [2024-02-13 07:31:08.162483] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 145668) is not found. Dropping the request. 00:30:44.452 Executing: test_write_invalid_db 00:30:44.452 Waiting for AER completion... 00:30:44.452 Failure: test_write_invalid_db 00:30:44.452 00:30:44.452 Executing: test_invalid_db_write_overflow_sq 00:30:44.452 Waiting for AER completion... 00:30:44.452 Failure: test_invalid_db_write_overflow_sq 00:30:44.452 00:30:44.452 Executing: test_invalid_db_write_overflow_cq 00:30:44.453 Waiting for AER completion... 
00:30:44.453 Failure: test_invalid_db_write_overflow_cq 00:30:44.453 00:30:44.453 00:30:44.453 real 0m10.115s 00:30:44.453 user 0m8.495s 00:30:44.453 sys 0m1.565s 00:30:44.453 07:31:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:44.453 07:31:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.453 ************************************ 00:30:44.453 END TEST nvme_doorbell_aers 00:30:44.453 ************************************ 00:30:44.453 07:31:17 -- nvme/nvme.sh@97 -- # uname 00:30:44.453 07:31:17 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:30:44.453 07:31:17 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:30:44.453 07:31:17 -- common/autotest_common.sh@1075 -- # '[' 6 -le 1 ']' 00:30:44.453 07:31:17 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:44.453 07:31:17 -- common/autotest_common.sh@10 -- # set +x 00:30:44.453 ************************************ 00:30:44.453 START TEST nvme_multi_aen 00:30:44.453 ************************************ 00:30:44.453 07:31:17 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:30:44.711 [2024-02-13 07:31:18.228710] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 145668) is not found. Dropping the request. 00:30:44.711 [2024-02-13 07:31:18.228898] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 145668) is not found. Dropping the request. 00:30:44.711 [2024-02-13 07:31:18.228935] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 145668) is not found. Dropping the request. 00:30:44.711 Child process pid: 145883 00:30:44.970 [Child] Asynchronous Event Request test 00:30:44.970 [Child] Attached to 0000:00:06.0 00:30:44.970 [Child] Registering asynchronous event callbacks... 00:30:44.970 [Child] Getting orig temperature thresholds of all controllers 00:30:44.970 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:44.970 [Child] Waiting for all controllers to trigger AER and reset threshold 00:30:44.970 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:44.970 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:44.970 [Child] Cleaning up... 00:30:44.970 Asynchronous Event Request test 00:30:44.970 Attached to 0000:00:06.0 00:30:44.970 Reset controller to setup AER completions for this process 00:30:44.970 Registering asynchronous event callbacks... 00:30:44.970 Getting orig temperature thresholds of all controllers 00:30:44.970 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:30:44.970 Setting all controllers temperature threshold low to trigger AER 00:30:44.970 Waiting for all controllers temperature threshold to be set lower 00:30:44.970 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:30:44.970 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:30:44.970 Waiting for all controllers to trigger AER and reset threshold 00:30:44.970 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:30:44.970 Cleaning up... 
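Both nvme_doorbell_aers and nvme_multi_aen above locate their target the same way: the get_nvme_bdfs helper traced at the top of the doorbell test asks gen_nvme.sh for a bdev JSON config and lets jq pull each controller's PCI address out of it. Condensed from the xtrace, with the repo root written out as it is in this run:

    get_nvme_bdfs() {
        local rootdir=/home/vagrant/spdk_repo/spdk
        local bdfs
        # gen_nvme.sh emits {"config":[{"params":{"traddr":"0000:00:06.0", ...}}, ...]}
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} > 0 )) || return 1   # the "(( 1 == 0 ))" guard in the trace
        printf '%s\n' "${bdfs[@]}"          # a single 0000:00:06.0 on this VM
    }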
00:30:44.970 00:30:44.970 real 0m0.638s 00:30:44.970 user 0m0.236s 00:30:44.970 sys 0m0.229s 00:30:44.970 07:31:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:44.970 ************************************ 00:30:44.970 END TEST nvme_multi_aen 00:30:44.970 07:31:18 -- common/autotest_common.sh@10 -- # set +x 00:30:44.970 ************************************ 00:30:44.970 07:31:18 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:30:44.970 07:31:18 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:30:44.970 07:31:18 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:44.970 07:31:18 -- common/autotest_common.sh@10 -- # set +x 00:30:44.970 ************************************ 00:30:44.970 START TEST nvme_startup 00:30:44.970 ************************************ 00:30:44.970 07:31:18 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:30:45.538 Initializing NVMe Controllers 00:30:45.538 Attached to 0000:00:06.0 00:30:45.538 Initialization complete. 00:30:45.538 Time used:211741.188 (us). 00:30:45.538 00:30:45.538 real 0m0.302s 00:30:45.538 user 0m0.104s 00:30:45.538 sys 0m0.120s 00:30:45.538 07:31:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:45.538 07:31:18 -- common/autotest_common.sh@10 -- # set +x 00:30:45.538 ************************************ 00:30:45.538 END TEST nvme_startup 00:30:45.538 ************************************ 00:30:45.538 07:31:19 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:30:45.538 07:31:19 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:45.538 07:31:19 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:45.538 07:31:19 -- common/autotest_common.sh@10 -- # set +x 00:30:45.538 ************************************ 00:30:45.538 START TEST nvme_multi_secondary 00:30:45.538 ************************************ 00:30:45.538 07:31:19 -- common/autotest_common.sh@1102 -- # nvme_multi_secondary 00:30:45.538 07:31:19 -- nvme/nvme.sh@52 -- # pid0=145942 00:30:45.538 07:31:19 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:30:45.538 07:31:19 -- nvme/nvme.sh@54 -- # pid1=145943 00:30:45.538 07:31:19 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:30:45.538 07:31:19 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:30:48.826 Initializing NVMe Controllers 00:30:48.826 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:48.826 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:30:48.826 Initialization complete. Launching workers. 00:30:48.826 ======================================================== 00:30:48.826 Latency(us) 00:30:48.826 Device Information : IOPS MiB/s Average min max 00:30:48.826 PCIE (0000:00:06.0) NSID 1 from core 2: 14238.66 55.62 1123.36 141.46 20562.85 00:30:48.826 ======================================================== 00:30:48.826 Total : 14238.66 55.62 1123.36 141.46 20562.85 00:30:48.826 00:30:48.826 07:31:22 -- nvme/nvme.sh@56 -- # wait 145942 00:30:49.084 Initializing NVMe Controllers 00:30:49.084 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:49.084 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:30:49.084 Initialization complete. Launching workers. 
00:30:49.084 ======================================================== 00:30:49.084 Latency(us) 00:30:49.084 Device Information : IOPS MiB/s Average min max 00:30:49.084 PCIE (0000:00:06.0) NSID 1 from core 1: 32835.66 128.26 486.93 110.60 5655.62 00:30:49.084 ======================================================== 00:30:49.084 Total : 32835.66 128.26 486.93 110.60 5655.62 00:30:49.084 00:30:50.986 Initializing NVMe Controllers 00:30:50.986 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:50.986 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:50.986 Initialization complete. Launching workers. 00:30:50.986 ======================================================== 00:30:50.986 Latency(us) 00:30:50.986 Device Information : IOPS MiB/s Average min max 00:30:50.986 PCIE (0000:00:06.0) NSID 1 from core 0: 42067.52 164.33 380.01 99.34 5817.36 00:30:50.986 ======================================================== 00:30:50.986 Total : 42067.52 164.33 380.01 99.34 5817.36 00:30:50.986 00:30:50.986 07:31:24 -- nvme/nvme.sh@57 -- # wait 145943 00:30:50.986 07:31:24 -- nvme/nvme.sh@61 -- # pid0=146045 00:30:50.986 07:31:24 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:30:50.986 07:31:24 -- nvme/nvme.sh@63 -- # pid1=146047 00:30:50.986 07:31:24 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:30:50.986 07:31:24 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:30:54.276 Initializing NVMe Controllers 00:30:54.276 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:54.276 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:30:54.276 Initialization complete. Launching workers. 00:30:54.276 ======================================================== 00:30:54.276 Latency(us) 00:30:54.276 Device Information : IOPS MiB/s Average min max 00:30:54.276 PCIE (0000:00:06.0) NSID 1 from core 0: 35580.33 138.99 449.31 118.73 1712.59 00:30:54.276 ======================================================== 00:30:54.276 Total : 35580.33 138.99 449.31 118.73 1712.59 00:30:54.276 00:30:54.546 Initializing NVMe Controllers 00:30:54.546 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:54.546 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:30:54.546 Initialization complete. Launching workers. 00:30:54.546 ======================================================== 00:30:54.546 Latency(us) 00:30:54.546 Device Information : IOPS MiB/s Average min max 00:30:54.546 PCIE (0000:00:06.0) NSID 1 from core 1: 34367.62 134.25 465.23 104.66 1610.41 00:30:54.546 ======================================================== 00:30:54.546 Total : 34367.62 134.25 465.23 104.66 1610.41 00:30:54.546 00:30:56.464 Initializing NVMe Controllers 00:30:56.464 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:30:56.464 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:30:56.464 Initialization complete. Launching workers. 
00:30:56.464 ======================================================== 00:30:56.464 Latency(us) 00:30:56.464 Device Information : IOPS MiB/s Average min max 00:30:56.464 PCIE (0000:00:06.0) NSID 1 from core 2: 18430.46 71.99 867.52 128.44 17206.58 00:30:56.464 ======================================================== 00:30:56.464 Total : 18430.46 71.99 867.52 128.44 17206.58 00:30:56.464 00:30:56.464 07:31:30 -- nvme/nvme.sh@65 -- # wait 146045 00:30:56.464 07:31:30 -- nvme/nvme.sh@66 -- # wait 146047 00:30:56.464 00:30:56.464 real 0m10.999s 00:30:56.464 user 0m18.713s 00:30:56.464 sys 0m0.861s 00:30:56.464 07:31:30 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:30:56.464 07:31:30 -- common/autotest_common.sh@10 -- # set +x 00:30:56.464 ************************************ 00:30:56.464 END TEST nvme_multi_secondary 00:30:56.464 ************************************ 00:30:56.464 07:31:30 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:30:56.464 07:31:30 -- nvme/nvme.sh@102 -- # kill_stub 00:30:56.464 07:31:30 -- common/autotest_common.sh@1063 -- # [[ -e /proc/145212 ]] 00:30:56.464 07:31:30 -- common/autotest_common.sh@1064 -- # kill 145212 00:30:56.464 07:31:30 -- common/autotest_common.sh@1065 -- # wait 145212 00:30:57.401 [2024-02-13 07:31:31.036081] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 145882) is not found. Dropping the request. 00:30:57.401 [2024-02-13 07:31:31.036189] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 145882) is not found. Dropping the request. 00:30:57.401 [2024-02-13 07:31:31.036234] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 145882) is not found. Dropping the request. 00:30:57.401 [2024-02-13 07:31:31.036277] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 145882) is not found. Dropping the request. 00:30:57.660 07:31:31 -- common/autotest_common.sh@1067 -- # rm -f /var/run/spdk_stub0 00:30:57.660 07:31:31 -- common/autotest_common.sh@1071 -- # echo 2 00:30:57.660 07:31:31 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:30:57.660 07:31:31 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:30:57.660 07:31:31 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:30:57.660 07:31:31 -- common/autotest_common.sh@10 -- # set +x 00:30:57.660 ************************************ 00:30:57.660 START TEST bdev_nvme_reset_stuck_adm_cmd 00:30:57.660 ************************************ 00:30:57.660 07:31:31 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:30:57.919 * Looking for test storage... 
00:30:57.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:30:57.919 07:31:31 -- common/autotest_common.sh@1507 -- # bdfs=() 00:30:57.919 07:31:31 -- common/autotest_common.sh@1507 -- # local bdfs 00:30:57.919 07:31:31 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:30:57.919 07:31:31 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:30:57.919 07:31:31 -- common/autotest_common.sh@1496 -- # bdfs=() 00:30:57.919 07:31:31 -- common/autotest_common.sh@1496 -- # local bdfs 00:30:57.919 07:31:31 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:57.919 07:31:31 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:30:57.919 07:31:31 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:30:57.919 07:31:31 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:30:57.919 07:31:31 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:30:57.919 07:31:31 -- common/autotest_common.sh@1510 -- # echo 0000:00:06.0 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=146209 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:57.919 07:31:31 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 146209 00:30:57.919 07:31:31 -- common/autotest_common.sh@817 -- # '[' -z 146209 ']' 00:30:57.919 07:31:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.919 07:31:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:57.919 07:31:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.919 07:31:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:57.919 07:31:31 -- common/autotest_common.sh@10 -- # set +x 00:30:57.919 [2024-02-13 07:31:31.506908] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
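The waitforlisten call above is the standard autotest idiom: start spdk_tgt in the background, then poll until its UNIX-domain RPC socket answers before issuing any bdev_nvme_* RPCs. A minimal sketch of the same pattern (the polling body is a simplification of what waitforlisten actually does):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    spdk_target_pid=$!
    # rpc_get_methods succeeds only once the target is listening on the socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null; do
        kill -0 "$spdk_target_pid" || exit 1   # bail out if the target died
        sleep 0.5
    done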
00:30:57.919 [2024-02-13 07:31:31.507080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146209 ] 00:30:58.178 [2024-02-13 07:31:31.703264] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:58.437 [2024-02-13 07:31:31.986494] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:58.437 [2024-02-13 07:31:31.986854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:58.437 [2024-02-13 07:31:31.986948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:58.437 [2024-02-13 07:31:31.987230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:58.437 [2024-02-13 07:31:31.987238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.817 07:31:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:59.817 07:31:33 -- common/autotest_common.sh@850 -- # return 0 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:30:59.817 07:31:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.817 07:31:33 -- common/autotest_common.sh@10 -- # set +x 00:30:59.817 nvme0n1 00:30:59.817 07:31:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_PmhHh.txt 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:30:59.817 07:31:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:30:59.817 07:31:33 -- common/autotest_common.sh@10 -- # set +x 00:30:59.817 true 00:30:59.817 07:31:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1707809493 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=146252 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:30:59.817 07:31:33 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:31:01.723 07:31:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.723 07:31:35 -- common/autotest_common.sh@10 -- # set +x 00:31:01.723 [2024-02-13 07:31:35.317971] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:31:01.723 [2024-02-13 07:31:35.318513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.723 [2024-02-13 07:31:35.318609] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:31:01.723 [2024-02-13 07:31:35.318657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.723 [2024-02-13 07:31:35.321141] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:01.723 07:31:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.723 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 146252 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 146252 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 146252 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:01.723 07:31:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:01.723 07:31:35 -- common/autotest_common.sh@10 -- # set +x 00:31:01.723 07:31:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_PmhHh.txt 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:31:01.723 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:31:01.983 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:31:01.983 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:31:01.983 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:31:01.983 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:31:01.983 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:31:01.983 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:31:01.983 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_PmhHh.txt 00:31:01.983 07:31:35 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 146209 00:31:01.983 07:31:35 -- common/autotest_common.sh@924 -- # '[' -z 146209 ']' 00:31:01.983 07:31:35 -- common/autotest_common.sh@928 -- # kill -0 146209 00:31:01.983 07:31:35 -- common/autotest_common.sh@929 -- # uname 00:31:01.983 
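Stripped of the xtrace noise, the reset-stuck-admin-command sequence that just ran is four RPCs: arm a one-shot injection that holds the next Get Features admin command (opc 10) for up to 15 s, submit that command in the background, reset the controller out from under it, and check that the held command then completes with SCT 0 / SC 1, which is what the base64_decode_bits calls above extract from the returned completion blob. Condensed:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # Submit the Get Features command asynchronously; cmd_b64 stands for the
    # base64-encoded raw command shown in the trace above.
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
    sleep 2
    $RPC bdev_nvme_reset_controller nvme0   # reset while the admin command is stuck
    wait                                    # the held command completes: SCT 0 / SC 1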
07:31:35 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:31:01.983 07:31:35 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 146209 00:31:01.983 07:31:35 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:31:01.983 07:31:35 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:31:01.983 killing process with pid 146209 00:31:01.983 07:31:35 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 146209' 00:31:01.983 07:31:35 -- common/autotest_common.sh@943 -- # kill 146209 00:31:01.983 07:31:35 -- common/autotest_common.sh@948 -- # wait 146209 00:31:03.888 07:31:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:31:03.888 07:31:37 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:31:03.888 ************************************ 00:31:03.888 END TEST bdev_nvme_reset_stuck_adm_cmd 00:31:03.888 ************************************ 00:31:03.888 00:31:03.888 real 0m6.183s 00:31:03.888 user 0m21.837s 00:31:03.888 sys 0m0.745s 00:31:03.888 07:31:37 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:03.888 07:31:37 -- common/autotest_common.sh@10 -- # set +x 00:31:03.888 07:31:37 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:31:03.888 07:31:37 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:31:03.888 07:31:37 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:03.888 07:31:37 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:03.888 07:31:37 -- common/autotest_common.sh@10 -- # set +x 00:31:03.888 ************************************ 00:31:03.888 START TEST nvme_fio 00:31:03.888 ************************************ 00:31:03.888 07:31:37 -- common/autotest_common.sh@1102 -- # nvme_fio_test 00:31:03.888 07:31:37 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:31:03.888 07:31:37 -- nvme/nvme.sh@32 -- # ran_fio=false 00:31:03.888 07:31:37 -- nvme/nvme.sh@33 -- # bdfs=($(get_nvme_bdfs)) 00:31:03.888 07:31:37 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:31:03.888 07:31:37 -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:03.888 07:31:37 -- common/autotest_common.sh@1496 -- # local bdfs 00:31:03.888 07:31:37 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:03.888 07:31:37 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:03.888 07:31:37 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:04.147 07:31:37 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:04.147 07:31:37 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:31:04.147 07:31:37 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:31:04.147 07:31:37 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:31:04.147 07:31:37 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:04.147 07:31:37 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:31:04.406 07:31:37 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:31:04.406 07:31:37 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:31:04.665 07:31:38 -- nvme/nvme.sh@41 -- # bs=4096 00:31:04.665 07:31:38 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:04.665 
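fio_nvme in nvme_fio is a thin wrapper around stock fio with SPDK's external ioengine preloaded; its expansion follows on the next line. Minus the ASAN preload juggling, the essential invocation is:

    sudo LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096
    # The dots in traddr are deliberate: fio reserves ':' in filenames, so the
    # plugin takes 0000.00.06.0 in place of the PCI address 0000:00:06.0. The
    # job file supplies ioengine=spdk, rw=randrw and iodepth=128, as the fio
    # banner below confirms.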
07:31:38 -- common/autotest_common.sh@1337 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:04.665 07:31:38 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:04.665 07:31:38 -- common/autotest_common.sh@1316 -- # sanitizers=(libasan libclang_rt.asan) 00:31:04.665 07:31:38 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:04.665 07:31:38 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:04.665 07:31:38 -- common/autotest_common.sh@1318 -- # shift 00:31:04.665 07:31:38 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:04.665 07:31:38 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:04.665 07:31:38 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:04.665 07:31:38 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:31:04.665 07:31:38 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:04.665 07:31:38 -- common/autotest_common.sh@1322 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:31:04.665 07:31:38 -- common/autotest_common.sh@1323 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:31:04.665 07:31:38 -- common/autotest_common.sh@1324 -- # break 00:31:04.665 07:31:38 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:31:04.665 07:31:38 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:31:04.665 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:04.665 fio-3.28 00:31:04.665 Starting 1 thread 00:31:07.957 00:31:07.957 test: (groupid=0, jobs=1): err= 0: pid=146405: Tue Feb 13 07:31:41 2024 00:31:07.957 read: IOPS=13.9k, BW=54.4MiB/s (57.1MB/s)(109MiB/2001msec) 00:31:07.957 slat (nsec): min=3872, max=81385, avg=6691.98, stdev=5064.68 00:31:07.957 clat (usec): min=292, max=8302, avg=4566.39, stdev=403.60 00:31:07.957 lat (usec): min=300, max=8372, avg=4573.08, stdev=403.96 00:31:07.957 clat percentiles (usec): 00:31:07.957 | 1.00th=[ 3523], 5.00th=[ 3818], 10.00th=[ 4015], 20.00th=[ 4228], 00:31:07.957 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:31:07.957 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5080], 00:31:07.957 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 6194], 99.95th=[ 7373], 00:31:07.957 | 99.99th=[ 8225] 00:31:07.957 bw ( KiB/s): min=53048, max=58632, per=100.00%, avg=55810.67, stdev=2792.46, samples=3 00:31:07.957 iops : min=13262, max=14658, avg=13952.67, stdev=698.12, samples=3 00:31:07.957 write: IOPS=13.9k, BW=54.5MiB/s (57.1MB/s)(109MiB/2001msec); 0 zone resets 00:31:07.957 slat (nsec): min=3989, max=79560, avg=6915.06, stdev=5120.31 00:31:07.957 clat (usec): min=341, max=8220, avg=4583.84, stdev=409.33 00:31:07.957 lat (usec): min=349, max=8230, avg=4590.76, stdev=409.67 00:31:07.957 clat percentiles (usec): 00:31:07.957 | 1.00th=[ 3523], 5.00th=[ 3851], 10.00th=[ 4047], 20.00th=[ 4293], 00:31:07.957 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4752], 00:31:07.957 | 70.00th=[ 4817], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5080], 00:31:07.957 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 6325], 99.95th=[ 7373], 
00:31:07.957 | 99.99th=[ 8094] 00:31:07.957 bw ( KiB/s): min=53272, max=58200, per=100.00%, avg=55837.33, stdev=2470.24, samples=3 00:31:07.957 iops : min=13318, max=14550, avg=13959.33, stdev=617.56, samples=3 00:31:07.957 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:31:07.957 lat (msec) : 2=0.04%, 4=8.94%, 10=90.98% 00:31:07.957 cpu : usr=100.15%, sys=0.00%, ctx=4, majf=0, minf=37 00:31:07.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:07.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:07.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:07.957 issued rwts: total=27882,27904,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:07.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:07.957 00:31:07.957 Run status group 0 (all jobs): 00:31:07.957 READ: bw=54.4MiB/s (57.1MB/s), 54.4MiB/s-54.4MiB/s (57.1MB/s-57.1MB/s), io=109MiB (114MB), run=2001-2001msec 00:31:07.957 WRITE: bw=54.5MiB/s (57.1MB/s), 54.5MiB/s-54.5MiB/s (57.1MB/s-57.1MB/s), io=109MiB (114MB), run=2001-2001msec 00:31:07.957 ----------------------------------------------------- 00:31:07.957 Suppressions used: 00:31:07.957 count bytes template 00:31:07.957 2 38 /usr/src/fio/parse.c 00:31:07.957 ----------------------------------------------------- 00:31:07.957 00:31:07.957 07:31:41 -- nvme/nvme.sh@44 -- # ran_fio=true 00:31:07.957 07:31:41 -- nvme/nvme.sh@46 -- # true 00:31:07.957 00:31:07.957 real 0m4.025s 00:31:07.957 user 0m3.325s 00:31:07.957 sys 0m0.369s 00:31:07.957 07:31:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:07.957 ************************************ 00:31:07.957 END TEST nvme_fio 00:31:07.957 ************************************ 00:31:07.957 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:31:07.957 00:31:07.957 real 0m48.796s 00:31:07.957 user 2m7.521s 00:31:07.957 sys 0m9.136s 00:31:07.957 07:31:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:07.957 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:31:07.957 ************************************ 00:31:07.957 END TEST nvme 00:31:07.957 ************************************ 00:31:08.217 07:31:41 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:31:08.217 07:31:41 -- spdk/autotest.sh@227 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:08.217 07:31:41 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:08.217 07:31:41 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:08.217 07:31:41 -- common/autotest_common.sh@10 -- # set +x 00:31:08.217 ************************************ 00:31:08.217 START TEST nvme_scc 00:31:08.217 ************************************ 00:31:08.217 07:31:41 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:31:08.217 * Looking for test storage... 
00:31:08.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:08.217 07:31:41 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:08.217 07:31:41 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:31:08.217 07:31:41 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:31:08.217 07:31:41 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:31:08.217 07:31:41 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:08.217 07:31:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:08.217 07:31:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:08.217 07:31:41 -- nvme/functions.sh@10 -- # ctrls=() 00:31:08.217 07:31:41 -- nvme/functions.sh@10 -- # declare -A ctrls 00:31:08.217 07:31:41 -- nvme/functions.sh@11 -- # nvmes=() 00:31:08.217 07:31:41 -- nvme/functions.sh@11 -- # declare -A nvmes 00:31:08.217 07:31:41 -- nvme/functions.sh@12 -- # bdfs=() 00:31:08.217 07:31:41 -- nvme/functions.sh@12 -- # declare -A bdfs 00:31:08.217 07:31:41 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:31:08.217 07:31:41 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:31:08.217 07:31:41 -- nvme/functions.sh@14 -- # nvme_name= 00:31:08.217 07:31:41 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:08.217 07:31:41 -- nvme/nvme_scc.sh@12 -- # uname 00:31:08.217 07:31:41 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:31:08.217 07:31:41 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:31:08.217 07:31:41 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:31:08.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:31:08.476 Waiting for block devices as requested 00:31:08.476 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:31:08.476 07:31:42 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:31:08.476 07:31:42 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:31:08.476 07:31:42 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:31:08.476 07:31:42 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:31:08.476 07:31:42 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:31:08.476 07:31:42 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:31:08.476 07:31:42 -- scripts/common.sh@15 -- # local i 00:31:08.476 07:31:42 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:31:08.476 07:31:42 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:31:08.476 07:31:42 -- scripts/common.sh@24 -- # return 0 00:31:08.476 07:31:42 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:31:08.476 07:31:42 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:31:08.476 07:31:42 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:31:08.476 07:31:42 -- nvme/functions.sh@18 -- # shift 00:31:08.476 07:31:42 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:31:08.476 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.476 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.476 07:31:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 
00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340 "' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[sn]='12340 ' 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:31:08.739 07:31:42 -- 
nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]=""' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[fguid]= 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:31:08.739 
07:31:42 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.739 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:31:08.739 07:31:42 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:31:08.739 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 
-- # eval 'nvme0[apsta]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # 
read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:31:08.740 
07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.740 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.740 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:31:08.740 07:31:42 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:31:08.741 07:31:42 -- 
nvme/functions.sh@23 -- # nvme0[mnan]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:31:08.741 
07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:31:08.741 07:31:42 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:31:08.741 07:31:42 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:31:08.741 07:31:42 -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:31:08.741 07:31:42 -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@18 -- # shift 00:31:08.741 07:31:42 -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 
00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.741 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.741 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.741 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:31:08.742 07:31:42 -- 
nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:31:08.742 
07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:31:08.742 07:31:42 -- 
nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.742 07:31:42 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:31:08.742 07:31:42 -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # IFS=: 00:31:08.742 07:31:42 -- nvme/functions.sh@21 -- # read -r reg val 00:31:08.743 07:31:42 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:31:08.743 07:31:42 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:31:08.743 07:31:42 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:31:08.743 07:31:42 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:31:08.743 07:31:42 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:31:08.743 07:31:42 -- nvme/functions.sh@65 -- # (( 1 > 0 )) 00:31:08.743 07:31:42 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:31:08.743 07:31:42 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:31:08.743 07:31:42 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:31:08.743 07:31:42 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:31:08.743 07:31:42 -- nvme/functions.sh@190 -- # (( 1 == 0 )) 00:31:08.743 07:31:42 -- nvme/functions.sh@192 -- # local ctrl 
feature=scc 00:31:08.743 07:31:42 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:31:08.743 07:31:42 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:31:08.743 07:31:42 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:31:08.743 07:31:42 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:31:08.743 07:31:42 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:31:08.743 07:31:42 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:31:08.743 07:31:42 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:31:08.743 07:31:42 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:31:08.743 07:31:42 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:31:08.743 07:31:42 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:31:08.743 07:31:42 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:31:08.743 07:31:42 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:31:08.743 07:31:42 -- nvme/functions.sh@76 -- # echo 0x15d 00:31:08.743 07:31:42 -- nvme/functions.sh@184 -- # oncs=0x15d 00:31:08.743 07:31:42 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:31:08.743 07:31:42 -- nvme/functions.sh@197 -- # echo nvme0 00:31:08.743 07:31:42 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:31:08.743 07:31:42 -- nvme/functions.sh@206 -- # echo nvme0 00:31:08.743 07:31:42 -- nvme/functions.sh@207 -- # return 0 00:31:08.743 07:31:42 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0 00:31:08.743 07:31:42 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:06.0 00:31:08.743 07:31:42 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:09.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:31:09.020 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:31:10.446 07:31:44 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:10.446 07:31:44 -- common/autotest_common.sh@1075 -- # '[' 4 -le 1 ']' 00:31:10.446 07:31:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:10.446 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:10.446 ************************************ 00:31:10.446 START TEST nvme_simple_copy 00:31:10.446 ************************************ 00:31:10.446 07:31:44 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:06.0' 00:31:10.705 Initializing NVMe Controllers 00:31:10.705 Attaching to 0000:00:06.0 00:31:10.705 Controller supports SCC. Attached to 0000:00:06.0 00:31:10.705 Namespace ID: 1 size: 5GB 00:31:10.705 Initialization complete. 
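The long trace above is scan_nvme_ctrls at work: functions.sh runs nvme id-ctrl / id-ns, splits every "reg : val" output line on the colon, and evals each field into a globally scoped associative array (nvme0, nvme0n1). Stripped of the eval and name-reference plumbing, the core pattern is the following sketch (a simplification, not the real functions.sh):

    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}      # field names arrive padded, e.g. "vid       "
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }           # raw value, e.g. 0x1b36 for vid
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} oncs=${ctrl[oncs]}"

The stored words are what later gate the tests: ctrl_has_scc masks bit 8 of the saved ONCS value, (( 0x15d & 1 << 8 )), which is set here, so this controller was selected for the simple-copy run shown around this point.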
00:31:10.705 00:31:10.705 Controller QEMU NVMe Ctrl (12340 ) 00:31:10.705 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:31:10.705 Namespace Block Size:4096 00:31:10.705 Writing LBAs 0 to 63 with Random Data 00:31:10.705 Copied LBAs from 0 - 63 to the Destination LBA 256 00:31:10.705 LBAs matching Written Data: 64 00:31:10.705 00:31:10.705 real 0m0.323s 00:31:10.705 user 0m0.143s 00:31:10.705 sys 0m0.080s 00:31:10.705 07:31:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.705 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:10.705 ************************************ 00:31:10.705 END TEST nvme_simple_copy 00:31:10.705 ************************************ 00:31:10.964 00:31:10.964 real 0m2.741s 00:31:10.964 user 0m0.714s 00:31:10.964 sys 0m1.846s 00:31:10.964 07:31:44 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:10.964 ************************************ 00:31:10.964 END TEST nvme_scc 00:31:10.964 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:10.964 ************************************ 00:31:10.964 07:31:44 -- spdk/autotest.sh@229 -- # [[ 0 -eq 1 ]] 00:31:10.964 07:31:44 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:31:10.964 07:31:44 -- spdk/autotest.sh@235 -- # [[ '' -eq 1 ]] 00:31:10.964 07:31:44 -- spdk/autotest.sh@238 -- # [[ 0 -eq 1 ]] 00:31:10.964 07:31:44 -- spdk/autotest.sh@242 -- # [[ '' -eq 1 ]] 00:31:10.964 07:31:44 -- spdk/autotest.sh@246 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:10.964 07:31:44 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:10.964 07:31:44 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:10.964 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:10.964 ************************************ 00:31:10.964 START TEST nvme_rpc 00:31:10.964 ************************************ 00:31:10.964 07:31:44 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:31:10.964 * Looking for test storage... 00:31:10.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:10.964 07:31:44 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:10.964 07:31:44 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:31:10.964 07:31:44 -- common/autotest_common.sh@1507 -- # bdfs=() 00:31:10.964 07:31:44 -- common/autotest_common.sh@1507 -- # local bdfs 00:31:10.964 07:31:44 -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:31:10.964 07:31:44 -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:31:10.965 07:31:44 -- common/autotest_common.sh@1496 -- # bdfs=() 00:31:10.965 07:31:44 -- common/autotest_common.sh@1496 -- # local bdfs 00:31:10.965 07:31:44 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:10.965 07:31:44 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:10.965 07:31:44 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:31:10.965 07:31:44 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:31:10.965 07:31:44 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:06.0 00:31:10.965 07:31:44 -- common/autotest_common.sh@1510 -- # echo 0000:00:06.0 00:31:10.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
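Here get_first_nvme_bdf resolves the RPC test's target device: gen_nvme.sh emits a JSON bdev config and jq pulls out every PCI address. Condensed (paths exactly as traced, error handling trimmed):

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # the helper bails out when no NVMe device is found
    printf '%s\n' "${bdfs[0]}"        # here: 0000:00:06.0

Only the first BDF is used, which is fine on this single-controller QEMU guest.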
00:31:10.965 07:31:44 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:31:10.965 07:31:44 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=146911 00:31:10.965 07:31:44 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:31:10.965 07:31:44 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:10.965 07:31:44 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 146911 00:31:10.965 07:31:44 -- common/autotest_common.sh@817 -- # '[' -z 146911 ']' 00:31:10.965 07:31:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.965 07:31:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:10.965 07:31:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.965 07:31:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:10.965 07:31:44 -- common/autotest_common.sh@10 -- # set +x 00:31:11.223 [2024-02-13 07:31:44.675566] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:31:11.224 [2024-02-13 07:31:44.675759] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146911 ] 00:31:11.224 [2024-02-13 07:31:44.852395] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:11.482 [2024-02-13 07:31:45.054382] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:11.482 [2024-02-13 07:31:45.054782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:11.482 [2024-02-13 07:31:45.054789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.860 07:31:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:12.860 07:31:46 -- common/autotest_common.sh@850 -- # return 0 00:31:12.860 07:31:46 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:31:12.860 Nvme0n1 00:31:12.860 07:31:46 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:31:12.860 07:31:46 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:31:13.119 request: 00:31:13.119 { 00:31:13.119 "filename": "non_existing_file", 00:31:13.119 "bdev_name": "Nvme0n1", 00:31:13.119 "method": "bdev_nvme_apply_firmware", 00:31:13.119 "req_id": 1 00:31:13.119 } 00:31:13.119 Got JSON-RPC error response 00:31:13.119 response: 00:31:13.119 { 00:31:13.119 "code": -32603, 00:31:13.119 "message": "open file failed." 
00:31:13.119 } 00:31:13.119 07:31:46 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:31:13.119 07:31:46 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:31:13.119 07:31:46 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:31:13.378 07:31:46 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:31:13.378 07:31:46 -- nvme/nvme_rpc.sh@40 -- # killprocess 146911 00:31:13.378 07:31:46 -- common/autotest_common.sh@924 -- # '[' -z 146911 ']' 00:31:13.378 07:31:46 -- common/autotest_common.sh@928 -- # kill -0 146911 00:31:13.378 07:31:46 -- common/autotest_common.sh@929 -- # uname 00:31:13.378 07:31:46 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:31:13.378 07:31:46 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 146911 00:31:13.378 killing process with pid 146911 00:31:13.378 07:31:46 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:31:13.378 07:31:46 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:31:13.378 07:31:46 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 146911' 00:31:13.378 07:31:46 -- common/autotest_common.sh@943 -- # kill 146911 00:31:13.378 07:31:46 -- common/autotest_common.sh@948 -- # wait 146911 00:31:15.283 00:31:15.283 real 0m4.335s 00:31:15.283 user 0m8.157s 00:31:15.283 sys 0m0.641s 00:31:15.283 07:31:48 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:15.283 ************************************ 00:31:15.283 07:31:48 -- common/autotest_common.sh@10 -- # set +x 00:31:15.283 END TEST nvme_rpc 00:31:15.283 ************************************ 00:31:15.283 07:31:48 -- spdk/autotest.sh@247 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:15.283 07:31:48 -- common/autotest_common.sh@1075 -- # '[' 2 -le 1 ']' 00:31:15.283 07:31:48 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:15.283 07:31:48 -- common/autotest_common.sh@10 -- # set +x 00:31:15.283 ************************************ 00:31:15.283 START TEST nvme_rpc_timeouts 00:31:15.283 ************************************ 00:31:15.283 07:31:48 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:31:15.283 * Looking for test storage... 00:31:15.283 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:31:15.283 07:31:48 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:15.283 07:31:48 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_146991 00:31:15.283 07:31:48 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_146991 00:31:15.283 07:31:48 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=147015 00:31:15.283 07:31:48 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:31:15.283 07:31:48 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:31:15.283 07:31:48 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 147015 00:31:15.283 07:31:48 -- common/autotest_common.sh@817 -- # '[' -z 147015 ']' 00:31:15.283 07:31:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:15.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
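Stepping back to the firmware call above: it is a deliberate negative test. non_existing_file does not exist, so bdev_nvme_apply_firmware must come back with JSON-RPC error -32603 ("open file failed.") and the script only verifies that a failure was recorded. Roughly (a sketch; the rv handling follows the hints in the trace, not the verbatim script):

    rv=
    $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1 || rv=1
    [[ -n $rv ]] || exit 1   # an unexpectedly successful RPC would fail the suite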
00:31:15.283 07:31:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:15.283 07:31:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:15.283 07:31:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:15.283 07:31:48 -- common/autotest_common.sh@10 -- # set +x 00:31:15.542 [2024-02-13 07:31:49.003148] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:31:15.542 [2024-02-13 07:31:49.003558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147015 ] 00:31:15.542 [2024-02-13 07:31:49.174183] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:15.801 [2024-02-13 07:31:49.370218] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:15.801 [2024-02-13 07:31:49.370622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:15.801 [2024-02-13 07:31:49.370629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.179 07:31:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:17.179 07:31:50 -- common/autotest_common.sh@850 -- # return 0 00:31:17.179 Checking default timeout settings: 00:31:17.179 07:31:50 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:31:17.179 07:31:50 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:17.438 Making settings changes with rpc: 00:31:17.438 07:31:50 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:31:17.438 07:31:50 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:31:17.438 Check default vs. modified settings: 00:31:17.438 07:31:51 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:31:17.438 07:31:51 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_146991 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_146991 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:31:18.006 Setting action_on_timeout is changed as expected. 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
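The "changed as expected" verdicts above and below all come from the same three-stage pipeline applied to both saved config snapshots; condensed into a helper (file names as traced):

    setting_value() {   # setting_value <name> <file>
        grep "$1" "$2" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    before=$(setting_value timeout_us /tmp/settings_default_146991)
    after=$(setting_value timeout_us /tmp/settings_modified_146991)
    [[ $before == "$after" ]] && exit 1   # an unchanged value means the RPC had no effect
    echo "Setting timeout_us is changed as expected."

The sed stage strips punctuation so that quoted JSON values compare cleanly against bare numbers.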
00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_146991 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_146991 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:31:18.006 Setting timeout_us is changed as expected. 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_146991 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_146991 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:31:18.006 Setting timeout_admin_us is changed as expected. 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_146991 /tmp/settings_modified_146991 00:31:18.006 07:31:51 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 147015 00:31:18.006 07:31:51 -- common/autotest_common.sh@924 -- # '[' -z 147015 ']' 00:31:18.006 07:31:51 -- common/autotest_common.sh@928 -- # kill -0 147015 00:31:18.006 07:31:51 -- common/autotest_common.sh@929 -- # uname 00:31:18.006 07:31:51 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:31:18.006 07:31:51 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 147015 00:31:18.006 07:31:51 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:31:18.006 killing process with pid 147015 00:31:18.006 07:31:51 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:31:18.006 07:31:51 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 147015' 00:31:18.006 07:31:51 -- common/autotest_common.sh@943 -- # kill 147015 00:31:18.006 07:31:51 -- common/autotest_common.sh@948 -- # wait 147015 00:31:19.909 RPC TIMEOUT SETTING TEST PASSED. 00:31:19.909 07:31:53 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
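Stripped of xtrace noise, the test that just passed did roughly the following; the commands and temp-file names are the ones traced above, and the loop condenses the per-setting grep/awk/sed checks into one pass:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Snapshot defaults, change the NVMe timeouts, snapshot again.
  $rpc_py save_config > /tmp/settings_default_146991
  $rpc_py bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc_py save_config > /tmp/settings_modified_146991
  # Each setting must differ between the two config snapshots.
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep $setting /tmp/settings_default_146991 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep $setting /tmp/settings_modified_146991 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
  done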
00:31:19.909 00:31:19.909 real 0m4.609s 00:31:19.909 user 0m8.789s 00:31:19.909 sys 0m0.786s 00:31:19.909 07:31:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:19.909 07:31:53 -- common/autotest_common.sh@10 -- # set +x 00:31:19.909 ************************************ 00:31:19.910 END TEST nvme_rpc_timeouts 00:31:19.910 ************************************ 00:31:19.910 07:31:53 -- spdk/autotest.sh@251 -- # '[' 1 -eq 0 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@255 -- # [[ 0 -eq 1 ]] 00:31:19.910 07:31:53 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@268 -- # timing_exit lib 00:31:19.910 07:31:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:19.910 07:31:53 -- common/autotest_common.sh@10 -- # set +x 00:31:19.910 07:31:53 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:31:19.910 07:31:53 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:31:19.910 07:31:53 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:31:19.910 07:31:53 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:31:19.910 07:31:53 -- spdk/autotest.sh@378 -- # [[ 1 -eq 1 ]] 00:31:19.910 07:31:53 -- spdk/autotest.sh@379 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:19.910 07:31:53 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:31:19.910 07:31:53 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:19.910 07:31:53 -- common/autotest_common.sh@10 -- # set +x 00:31:19.910 ************************************ 00:31:19.910 START TEST blockdev_raid5f 00:31:19.910 ************************************ 00:31:19.910 07:31:53 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:31:20.169 * Looking for test storage... 
00:31:20.169 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:31:20.169 07:31:53 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:31:20.169 07:31:53 -- bdev/nbd_common.sh@6 -- # set -e 00:31:20.169 07:31:53 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:31:20.169 07:31:53 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:20.169 07:31:53 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:31:20.169 07:31:53 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:31:20.169 07:31:53 -- bdev/blockdev.sh@18 -- # : 00:31:20.169 07:31:53 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:31:20.169 07:31:53 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:31:20.169 07:31:53 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:31:20.169 07:31:53 -- bdev/blockdev.sh@672 -- # uname -s 00:31:20.169 07:31:53 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:31:20.169 07:31:53 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:31:20.169 07:31:53 -- bdev/blockdev.sh@680 -- # test_type=raid5f 00:31:20.169 07:31:53 -- bdev/blockdev.sh@681 -- # crypto_device= 00:31:20.169 07:31:53 -- bdev/blockdev.sh@682 -- # dek= 00:31:20.169 07:31:53 -- bdev/blockdev.sh@683 -- # env_ctx= 00:31:20.169 07:31:53 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:31:20.169 07:31:53 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:31:20.169 07:31:53 -- bdev/blockdev.sh@688 -- # [[ raid5f == bdev ]] 00:31:20.169 07:31:53 -- bdev/blockdev.sh@688 -- # [[ raid5f == crypto_* ]] 00:31:20.169 07:31:53 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:31:20.169 07:31:53 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=147187 00:31:20.169 07:31:53 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:31:20.169 07:31:53 -- bdev/blockdev.sh@47 -- # waitforlisten 147187 00:31:20.169 07:31:53 -- common/autotest_common.sh@817 -- # '[' -z 147187 ']' 00:31:20.169 07:31:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.169 07:31:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:20.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.169 07:31:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.169 07:31:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:20.169 07:31:53 -- common/autotest_common.sh@10 -- # set +x 00:31:20.169 07:31:53 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:31:20.169 [2024-02-13 07:31:53.703896] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:31:20.169 [2024-02-13 07:31:53.704089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147187 ] 00:31:20.428 [2024-02-13 07:31:53.870703] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.428 [2024-02-13 07:31:54.049465] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:20.428 [2024-02-13 07:31:54.049751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.801 07:31:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:21.801 07:31:55 -- common/autotest_common.sh@850 -- # return 0 00:31:21.801 07:31:55 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:31:21.801 07:31:55 -- bdev/blockdev.sh@724 -- # setup_raid5f_conf 00:31:21.801 07:31:55 -- bdev/blockdev.sh@278 -- # rpc_cmd 00:31:21.801 07:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:21.801 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:21.801 Malloc0 00:31:21.801 Malloc1 00:31:21.801 Malloc2 00:31:21.801 07:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:21.801 07:31:55 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:31:21.801 07:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:21.801 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:21.801 07:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:21.801 07:31:55 -- bdev/blockdev.sh@738 -- # cat 00:31:21.801 07:31:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:31:21.801 07:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:21.801 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:21.801 07:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:21.801 07:31:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:31:21.801 07:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:21.801 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:21.801 07:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:21.801 07:31:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:31:21.801 07:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:21.801 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:21.801 07:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:21.801 07:31:55 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:31:21.801 07:31:55 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:31:21.801 07:31:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:21.801 07:31:55 -- common/autotest_common.sh@10 -- # set +x 00:31:21.801 07:31:55 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:31:21.801 07:31:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:22.059 07:31:55 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:31:22.059 07:31:55 -- bdev/blockdev.sh@747 -- # jq -r .name 00:31:22.059 07:31:55 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3f811347-e6fe-4b30-a4d2-1860b6d36d6d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3f811347-e6fe-4b30-a4d2-1860b6d36d6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3f811347-e6fe-4b30-a4d2-1860b6d36d6d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "5aae921f-f6a9-471a-91da-533e86f90a5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9c0e2f4e-b391-4c59-96f9-2a6efcbc1229",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c0065c12-86c1-48fb-a629-945a95da4ca4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:22.059 07:31:55 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:31:22.059 07:31:55 -- bdev/blockdev.sh@750 -- # hello_world_bdev=raid5f 00:31:22.059 07:31:55 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:31:22.059 07:31:55 -- bdev/blockdev.sh@752 -- # killprocess 147187 00:31:22.059 07:31:55 -- common/autotest_common.sh@924 -- # '[' -z 147187 ']' 00:31:22.059 07:31:55 -- common/autotest_common.sh@928 -- # kill -0 147187 00:31:22.059 07:31:55 -- common/autotest_common.sh@929 -- # uname 00:31:22.059 07:31:55 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:31:22.059 07:31:55 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 147187 00:31:22.059 07:31:55 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:31:22.059 07:31:55 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:31:22.059 07:31:55 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 147187' 00:31:22.059 killing process with pid 147187 00:31:22.059 07:31:55 -- common/autotest_common.sh@943 -- # kill 147187 00:31:22.059 07:31:55 -- common/autotest_common.sh@948 -- # wait 147187 00:31:24.588 07:31:57 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:24.588 07:31:57 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:24.588 07:31:57 -- common/autotest_common.sh@1075 -- # '[' 7 -le 1 ']' 00:31:24.588 07:31:57 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:24.588 07:31:57 -- common/autotest_common.sh@10 -- # set +x 00:31:24.588 ************************************ 00:31:24.588 START TEST bdev_hello_world 00:31:24.588 ************************************ 00:31:24.588 07:31:57 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:31:24.588 [2024-02-13 07:31:57.790417] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:31:24.588 [2024-02-13 07:31:57.790617] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147259 ] 00:31:24.588 [2024-02-13 07:31:57.958418] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.588 [2024-02-13 07:31:58.139640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.588 [2024-02-13 07:31:58.139785] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:31:25.159 [2024-02-13 07:31:58.619855] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:31:25.159 [2024-02-13 07:31:58.619945] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:31:25.159 [2024-02-13 07:31:58.620014] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:31:25.159 [2024-02-13 07:31:58.620588] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:31:25.159 [2024-02-13 07:31:58.620792] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:31:25.159 [2024-02-13 07:31:58.620836] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:31:25.159 [2024-02-13 07:31:58.620907] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:31:25.159 00:31:25.159 [2024-02-13 07:31:58.620966] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:31:25.159 [2024-02-13 07:31:58.621031] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:31:26.151 00:31:26.151 real 0m2.098s 00:31:26.151 user 0m1.623s 00:31:26.151 sys 0m0.359s 00:31:26.151 07:31:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:26.151 07:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.151 ************************************ 00:31:26.151 END TEST bdev_hello_world 00:31:26.151 ************************************ 00:31:26.410 07:31:59 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:31:26.410 07:31:59 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:31:26.410 07:31:59 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:26.410 07:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.410 ************************************ 00:31:26.410 START TEST bdev_bounds 00:31:26.410 ************************************ 00:31:26.410 07:31:59 -- common/autotest_common.sh@1102 -- # bdev_bounds '' 00:31:26.410 07:31:59 -- bdev/blockdev.sh@288 -- # bdevio_pid=147318 00:31:26.410 07:31:59 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:31:26.410 07:31:59 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:26.410 Process bdevio pid: 147318 00:31:26.410 07:31:59 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 147318' 00:31:26.410 07:31:59 -- bdev/blockdev.sh@291 -- # waitforlisten 147318 00:31:26.410 07:31:59 -- common/autotest_common.sh@817 -- # '[' -z 147318 ']' 00:31:26.410 07:31:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:26.410 07:31:59 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:31:26.410 07:31:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:26.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:26.410 07:31:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:26.410 07:31:59 -- common/autotest_common.sh@10 -- # set +x 00:31:26.410 [2024-02-13 07:31:59.941337] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:31:26.410 [2024-02-13 07:31:59.942415] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147318 ] 00:31:26.669 [2024-02-13 07:32:00.119438] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:26.669 [2024-02-13 07:32:00.301769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:26.669 [2024-02-13 07:32:00.301914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:26.669 [2024-02-13 07:32:00.301909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.669 [2024-02-13 07:32:00.302620] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:31:27.237 07:32:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:27.237 07:32:00 -- common/autotest_common.sh@850 -- # return 0 00:31:27.237 07:32:00 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:31:27.496 I/O targets: 00:31:27.496 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:31:27.496 00:31:27.496 00:31:27.496 CUnit - A unit testing framework for C - Version 2.1-3 00:31:27.496 http://cunit.sourceforge.net/ 00:31:27.496 00:31:27.496 00:31:27.496 Suite: bdevio tests on: raid5f 00:31:27.496 Test: blockdev write read block ...passed 00:31:27.496 Test: blockdev write zeroes read block ...passed 00:31:27.496 Test: blockdev write zeroes read no split ...passed 00:31:27.496 Test: blockdev write zeroes read split ...passed 00:31:27.496 Test: blockdev write zeroes read split partial ...passed 00:31:27.496 Test: blockdev reset ...passed 00:31:27.496 Test: blockdev write read 8 blocks ...passed 00:31:27.496 Test: blockdev write read size > 128k ...passed 00:31:27.496 Test: blockdev write read invalid size ...passed 00:31:27.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:27.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:27.496 Test: blockdev write read max offset ...passed 00:31:27.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:27.496 Test: blockdev writev readv 8 blocks ...passed 00:31:27.496 Test: blockdev writev readv 30 x 1block ...passed 00:31:27.496 Test: blockdev writev readv block ...passed 00:31:27.496 Test: blockdev writev readv size > 128k ...passed 00:31:27.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:27.496 Test: blockdev comparev and writev ...passed 00:31:27.496 Test: blockdev nvme passthru rw ...passed 00:31:27.496 Test: blockdev nvme passthru vendor specific ...passed 00:31:27.496 Test: blockdev nvme admin passthru ...passed 00:31:27.496 Test: blockdev copy ...passed 00:31:27.496 00:31:27.496 Run Summary: Type Total Ran Passed 
Failed Inactive 00:31:27.496 suites 1 1 n/a 0 0 00:31:27.496 tests 23 23 23 0 0 00:31:27.496 asserts 130 130 130 0 n/a 00:31:27.496 00:31:27.496 Elapsed time = 0.417 seconds 00:31:27.496 0 00:31:27.496 07:32:01 -- bdev/blockdev.sh@293 -- # killprocess 147318 00:31:27.496 07:32:01 -- common/autotest_common.sh@924 -- # '[' -z 147318 ']' 00:31:27.496 07:32:01 -- common/autotest_common.sh@928 -- # kill -0 147318 00:31:27.496 07:32:01 -- common/autotest_common.sh@929 -- # uname 00:31:27.496 07:32:01 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:31:27.496 07:32:01 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 147318 00:31:27.496 07:32:01 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:31:27.496 killing process with pid 147318 00:31:27.496 07:32:01 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:31:27.496 07:32:01 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 147318' 00:31:27.496 07:32:01 -- common/autotest_common.sh@943 -- # kill 147318 00:31:27.496 07:32:01 -- common/autotest_common.sh@948 -- # wait 147318 00:31:27.497 [2024-02-13 07:32:01.181410] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:31:28.873 07:32:02 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:31:28.873 00:31:28.873 real 0m2.552s 00:31:28.873 user 0m6.013s 00:31:28.873 sys 0m0.407s 00:31:28.873 07:32:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:28.873 ************************************ 00:31:28.873 END TEST bdev_bounds 00:31:28.873 ************************************ 00:31:28.873 07:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:28.873 07:32:02 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:28.873 07:32:02 -- common/autotest_common.sh@1075 -- # '[' 5 -le 1 ']' 00:31:28.873 07:32:02 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:28.873 07:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:28.873 ************************************ 00:31:28.873 START TEST bdev_nbd 00:31:28.873 ************************************ 00:31:28.873 07:32:02 -- common/autotest_common.sh@1102 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:31:28.873 07:32:02 -- bdev/blockdev.sh@298 -- # uname -s 00:31:28.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:31:28.873 07:32:02 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:31:28.873 07:32:02 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:28.873 07:32:02 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:31:28.873 07:32:02 -- bdev/blockdev.sh@302 -- # bdev_all=($2) 00:31:28.873 07:32:02 -- bdev/blockdev.sh@302 -- # local bdev_all 00:31:28.873 07:32:02 -- bdev/blockdev.sh@303 -- # local bdev_num=1 00:31:28.873 07:32:02 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:31:28.873 07:32:02 -- bdev/blockdev.sh@309 -- # nbd_all=(/dev/nbd+([0-9])) 00:31:28.873 07:32:02 -- bdev/blockdev.sh@309 -- # local nbd_all 00:31:28.873 07:32:02 -- bdev/blockdev.sh@310 -- # bdev_num=1 00:31:28.873 07:32:02 -- bdev/blockdev.sh@312 -- # nbd_list=(${nbd_all[@]::bdev_num}) 00:31:28.873 07:32:02 -- bdev/blockdev.sh@312 -- # local nbd_list 00:31:28.873 07:32:02 -- bdev/blockdev.sh@313 -- # bdev_list=(${bdev_all[@]::bdev_num}) 00:31:28.874 07:32:02 -- bdev/blockdev.sh@313 -- # local bdev_list 00:31:28.874 07:32:02 -- bdev/blockdev.sh@316 -- # nbd_pid=147380 00:31:28.874 07:32:02 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:31:28.874 07:32:02 -- bdev/blockdev.sh@318 -- # waitforlisten 147380 /var/tmp/spdk-nbd.sock 00:31:28.874 07:32:02 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:31:28.874 07:32:02 -- common/autotest_common.sh@817 -- # '[' -z 147380 ']' 00:31:28.874 07:32:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:31:28.874 07:32:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:28.874 07:32:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:31:28.874 07:32:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:28.874 07:32:02 -- common/autotest_common.sh@10 -- # set +x 00:31:28.874 [2024-02-13 07:32:02.527663] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 
00:31:28.874 [2024-02-13 07:32:02.527872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:29.132 [2024-02-13 07:32:02.683127] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.390 [2024-02-13 07:32:02.891279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:29.390 [2024-02-13 07:32:02.891410] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:31:29.956 07:32:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:29.956 07:32:03 -- common/autotest_common.sh@850 -- # return 0 00:31:29.956 07:32:03 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@114 -- # bdev_list=($2) 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@23 -- # bdev_list=($2) 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@24 -- # local i 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:29.956 07:32:03 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:31:30.215 07:32:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:31:30.215 07:32:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:31:30.215 07:32:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:31:30.215 07:32:03 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:30.215 07:32:03 -- common/autotest_common.sh@855 -- # local i 00:31:30.215 07:32:03 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:30.215 07:32:03 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:30.215 07:32:03 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:30.215 07:32:03 -- common/autotest_common.sh@859 -- # break 00:31:30.215 07:32:03 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:30.215 07:32:03 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:30.215 07:32:03 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:30.215 1+0 records in 00:31:30.215 1+0 records out 00:31:30.215 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367576 s, 11.1 MB/s 00:31:30.215 07:32:03 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.215 07:32:03 -- common/autotest_common.sh@872 -- # size=4096 00:31:30.215 07:32:03 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:30.215 07:32:03 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:30.215 07:32:03 -- common/autotest_common.sh@875 -- # return 0 00:31:30.215 07:32:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 
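The nbd leg just exported the bdev as a kernel block device and sanity-read it with dd. Condensed into a sketch, assuming the nbd kernel module is loaded and bdev_svc is listening on the -s socket (the retry loop stands in for the waitfornbd helper above):

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc_py nbd_start_disk raid5f /dev/nbd0                 # export bdev as /dev/nbd0
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done   # wait for the kernel
  # An O_DIRECT read of one 4 KiB block proves the data path end to end.
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  $rpc_py nbd_stop_disk /dev/nbd0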
00:31:30.215 07:32:03 -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:31:30.215 07:32:03 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:30.474 07:32:03 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:31:30.474 { 00:31:30.474 "nbd_device": "/dev/nbd0", 00:31:30.474 "bdev_name": "raid5f" 00:31:30.474 } 00:31:30.474 ]' 00:31:30.474 07:32:03 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:31:30.474 07:32:03 -- bdev/nbd_common.sh@119 -- # echo '[ 00:31:30.474 { 00:31:30.474 "nbd_device": "/dev/nbd0", 00:31:30.474 "bdev_name": "raid5f" 00:31:30.474 } 00:31:30.474 ]' 00:31:30.474 07:32:03 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:31:30.474 07:32:04 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:30.474 07:32:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:30.474 07:32:04 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:30.474 07:32:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:30.474 07:32:04 -- bdev/nbd_common.sh@51 -- # local i 00:31:30.474 07:32:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:30.474 07:32:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@41 -- # break 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@45 -- # return 0 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:30.731 07:32:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@65 -- # true 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@65 -- # count=0 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@122 -- # count=0 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@127 -- # return 0 00:31:31.296 07:32:04 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.296 07:32:04 -- 
bdev/nbd_common.sh@91 -- # bdev_list=($2) 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@92 -- # nbd_list=($3) 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@10 -- # bdev_list=($2) 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@11 -- # nbd_list=($3) 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@12 -- # local i 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:31.296 07:32:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:31:31.553 /dev/nbd0 00:31:31.553 07:32:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:31.553 07:32:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:31.553 07:32:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:31:31.553 07:32:05 -- common/autotest_common.sh@855 -- # local i 00:31:31.553 07:32:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:31:31.553 07:32:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:31:31.553 07:32:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:31:31.553 07:32:05 -- common/autotest_common.sh@859 -- # break 00:31:31.553 07:32:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:31:31.553 07:32:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:31:31.553 07:32:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:31.553 1+0 records in 00:31:31.553 1+0 records out 00:31:31.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027198 s, 15.1 MB/s 00:31:31.553 07:32:05 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:31.553 07:32:05 -- common/autotest_common.sh@872 -- # size=4096 00:31:31.553 07:32:05 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:31.553 07:32:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:31:31.553 07:32:05 -- common/autotest_common.sh@875 -- # return 0 00:31:31.553 07:32:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:31.553 07:32:05 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:31.553 07:32:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:31.553 07:32:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.553 07:32:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:31:31.811 { 00:31:31.811 "nbd_device": "/dev/nbd0", 00:31:31.811 "bdev_name": "raid5f" 00:31:31.811 } 00:31:31.811 ]' 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:31:31.811 { 00:31:31.811 "nbd_device": "/dev/nbd0", 00:31:31.811 "bdev_name": "raid5f" 00:31:31.811 } 00:31:31.811 ]' 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:31:31.811 07:32:05 -- 
bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@65 -- # count=1 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@66 -- # echo 1 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@95 -- # count=1 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:31:31.811 256+0 records in 00:31:31.811 256+0 records out 00:31:31.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543188 s, 193 MB/s 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:31:31.811 256+0 records in 00:31:31.811 256+0 records out 00:31:31.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280335 s, 37.4 MB/s 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@70 -- # nbd_list=($1) 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:31:31.811 07:32:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:31:31.812 07:32:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:31:31.812 07:32:05 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:31:31.812 07:32:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:31.812 07:32:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:31.812 07:32:05 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:31.812 07:32:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:31.812 07:32:05 -- bdev/nbd_common.sh@51 -- # local i 00:31:31.812 07:32:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:31.812 07:32:05 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@41 -- # break 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@45 -- # return 0 00:31:32.070 07:32:05 -- 
bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:32.070 07:32:05 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@65 -- # true 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@65 -- # count=0 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@104 -- # count=0 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@109 -- # return 0 00:31:32.327 07:32:05 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@132 -- # nbd_list=($2) 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:31:32.327 07:32:05 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:31:32.586 malloc_lvol_verify 00:31:32.586 07:32:06 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:31:32.844 0c8ac9c0-310e-446e-b61b-599dcea6c122 00:31:32.844 07:32:06 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:31:33.102 b2fd457b-d94e-4ca2-bd63-4dca8b1c4a1e 00:31:33.102 07:32:06 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:31:33.360 /dev/nbd0 00:31:33.360 07:32:06 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:31:33.360 mke2fs 1.45.5 (07-Jan-2020) 00:31:33.360 00:31:33.360 Filesystem too small for a journal 00:31:33.360 Creating filesystem with 1024 4k blocks and 1024 inodes 00:31:33.360 00:31:33.360 Allocating group tables: 0/1 done 00:31:33.360 Writing inode tables: 0/1 done 00:31:33.361 Writing superblocks and filesystem accounting information: 0/1 done 00:31:33.361 00:31:33.361 07:32:06 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:31:33.361 07:32:06 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:31:33.361 07:32:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:31:33.361 07:32:06 -- bdev/nbd_common.sh@50 -- # nbd_list=($2) 00:31:33.361 07:32:06 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:33.361 07:32:06 -- bdev/nbd_common.sh@51 -- # local i 00:31:33.361 07:32:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:33.361 07:32:06 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:33.619 
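The closing lvol check layers a logical volume on a fresh malloc bdev, exports it over nbd, and formats it; the 4 MiB lvol is too small for an ext4 journal, hence the mke2fs warning above. Condensed from the RPC calls traced in this run:

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  $rpc_py bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev
  $rpc_py bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
  $rpc_py bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol in store "lvs"
  $rpc_py nbd_start_disk lvs/lvol /dev/nbd0
  mkfs.ext4 /dev/nbd0
  $rpc_py nbd_stop_disk /dev/nbd0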
07:32:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@39 -- # sleep 0.1 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@37 -- # (( i++ )) 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@41 -- # break 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@45 -- # return 0 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:31:33.619 07:32:07 -- bdev/nbd_common.sh@147 -- # return 0 00:31:33.619 07:32:07 -- bdev/blockdev.sh@324 -- # killprocess 147380 00:31:33.619 07:32:07 -- common/autotest_common.sh@924 -- # '[' -z 147380 ']' 00:31:33.619 07:32:07 -- common/autotest_common.sh@928 -- # kill -0 147380 00:31:33.619 07:32:07 -- common/autotest_common.sh@929 -- # uname 00:31:33.619 07:32:07 -- common/autotest_common.sh@929 -- # '[' Linux = Linux ']' 00:31:33.619 07:32:07 -- common/autotest_common.sh@930 -- # ps --no-headers -o comm= 147380 00:31:33.619 07:32:07 -- common/autotest_common.sh@930 -- # process_name=reactor_0 00:31:33.619 07:32:07 -- common/autotest_common.sh@934 -- # '[' reactor_0 = sudo ']' 00:31:33.619 killing process with pid 147380 00:31:33.619 07:32:07 -- common/autotest_common.sh@942 -- # echo 'killing process with pid 147380' 00:31:33.619 07:32:07 -- common/autotest_common.sh@943 -- # kill 147380 00:31:33.619 07:32:07 -- common/autotest_common.sh@948 -- # wait 147380 00:31:33.619 [2024-02-13 07:32:07.208075] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:31:34.996 ************************************ 00:31:34.996 END TEST bdev_nbd 00:31:34.996 ************************************ 00:31:34.996 07:32:08 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:31:34.996 00:31:34.996 real 0m5.990s 00:31:34.996 user 0m8.584s 00:31:34.996 sys 0m1.108s 00:31:34.996 07:32:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:34.996 07:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:34.997 07:32:08 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:31:34.997 07:32:08 -- bdev/blockdev.sh@762 -- # '[' raid5f = nvme ']' 00:31:34.997 07:32:08 -- bdev/blockdev.sh@762 -- # '[' raid5f = gpt ']' 00:31:34.997 07:32:08 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1075 -- # '[' 3 -le 1 ']' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:34.997 07:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:34.997 ************************************ 00:31:34.997 START TEST bdev_fio 00:31:34.997 ************************************ 00:31:34.997 07:32:08 -- common/autotest_common.sh@1102 -- # fio_test_suite '' 00:31:34.997 07:32:08 -- bdev/blockdev.sh@329 -- # local env_context 00:31:34.997 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:31:34.997 07:32:08 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:31:34.997 07:32:08 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:31:34.997 07:32:08 -- bdev/blockdev.sh@337 -- # echo '' 00:31:34.997 
07:32:08 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:31:34.997 07:32:08 -- bdev/blockdev.sh@337 -- # env_context= 00:31:34.997 07:32:08 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1257 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:34.997 07:32:08 -- common/autotest_common.sh@1258 -- # local workload=verify 00:31:34.997 07:32:08 -- common/autotest_common.sh@1259 -- # local bdev_type=AIO 00:31:34.997 07:32:08 -- common/autotest_common.sh@1260 -- # local env_context= 00:31:34.997 07:32:08 -- common/autotest_common.sh@1261 -- # local fio_dir=/usr/src/fio 00:31:34.997 07:32:08 -- common/autotest_common.sh@1263 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1268 -- # '[' -z verify ']' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1272 -- # '[' -n '' ']' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1276 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:34.997 07:32:08 -- common/autotest_common.sh@1278 -- # cat 00:31:34.997 07:32:08 -- common/autotest_common.sh@1290 -- # '[' verify == verify ']' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1291 -- # cat 00:31:34.997 07:32:08 -- common/autotest_common.sh@1300 -- # '[' AIO == AIO ']' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1301 -- # /usr/src/fio/fio --version 00:31:34.997 07:32:08 -- common/autotest_common.sh@1301 -- # [[ fio-3.28 == *\f\i\o\-\3* ]] 00:31:34.997 07:32:08 -- common/autotest_common.sh@1302 -- # echo serialize_overlap=1 00:31:34.997 07:32:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:31:34.997 07:32:08 -- bdev/blockdev.sh@340 -- # echo '[job_raid5f]' 00:31:34.997 07:32:08 -- bdev/blockdev.sh@341 -- # echo filename=raid5f 00:31:34.997 07:32:08 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:31:34.997 07:32:08 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:34.997 07:32:08 -- common/autotest_common.sh@1075 -- # '[' 11 -le 1 ']' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:34.997 07:32:08 -- common/autotest_common.sh@10 -- # set +x 00:31:34.997 ************************************ 00:31:34.997 START TEST bdev_fio_rw_verify 00:31:34.997 ************************************ 00:31:34.997 07:32:08 -- common/autotest_common.sh@1102 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:34.997 07:32:08 -- common/autotest_common.sh@1333 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 
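fio_config_gen writes a standard fio job file in which filename= entries are bdev names rather than device paths; the spdk_bdev ioengine resolves them against --spdk_json_conf. A rough sketch of the file and invocation, assuming /tmp/bdev.fio as an illustrative path (the real file is test/bdev/bdev.fio, and the generated version also carries verify options omitted here):

  cat > /tmp/bdev.fio <<'EOF'
  [global]
  ioengine=spdk_bdev
  thread=1                 # SPDK fio plugins require fio's thread mode
  direct=1
  rw=randwrite             # matches the job banner printed below
  serialize_overlap=1      # appended for fio-3.x, as echoed above

  [job_raid5f]
  filename=raid5f
  EOF
  # fio loads the bdev plugin via LD_PRELOAD, as in the traced fio_plugin call.
  LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 /tmp/bdev.fio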
00:31:34.997 07:32:08 -- common/autotest_common.sh@1314 -- # local fio_dir=/usr/src/fio 00:31:34.997 07:32:08 -- common/autotest_common.sh@1316 -- # sanitizers=(libasan libclang_rt.asan) 00:31:34.997 07:32:08 -- common/autotest_common.sh@1316 -- # local sanitizers 00:31:34.997 07:32:08 -- common/autotest_common.sh@1317 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:34.997 07:32:08 -- common/autotest_common.sh@1318 -- # shift 00:31:34.997 07:32:08 -- common/autotest_common.sh@1320 -- # local asan_lib= 00:31:34.997 07:32:08 -- common/autotest_common.sh@1321 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.997 07:32:08 -- common/autotest_common.sh@1322 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:34.997 07:32:08 -- common/autotest_common.sh@1322 -- # grep libasan 00:31:34.997 07:32:08 -- common/autotest_common.sh@1322 -- # awk '{print $3}' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1322 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.5 00:31:34.997 07:32:08 -- common/autotest_common.sh@1323 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.5 ]] 00:31:34.997 07:32:08 -- common/autotest_common.sh@1324 -- # break 00:31:34.997 07:32:08 -- common/autotest_common.sh@1329 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.5 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:34.997 07:32:08 -- common/autotest_common.sh@1329 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:31:35.256 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:31:35.256 fio-3.28 00:31:35.256 Starting 1 thread 00:31:47.486 00:31:47.486 job_raid5f: (groupid=0, jobs=1): err= 0: pid=147640: Tue Feb 13 07:32:19 2024 00:31:47.486 read: IOPS=869k, BW=3394MiB/s (3559MB/s)(448MiB/132msec) 00:31:47.486 slat (usec): min=18, max=124, avg=20.80, stdev= 4.90 00:31:47.486 clat (usec): min=10, max=495, avg=136.84, stdev=54.11 00:31:47.486 lat (usec): min=30, max=520, avg=157.64, stdev=55.63 00:31:47.486 clat percentiles (usec): 00:31:47.486 | 50.000th=[ 137], 99.000th=[ 289], 99.900th=[ 351], 99.990th=[ 404], 00:31:47.486 | 99.999th=[ 474] 00:31:47.486 write: IOPS=12.0k, BW=46.8MiB/s (49.1MB/s)(462MiB/9869msec); 0 zone resets 00:31:47.486 slat (usec): min=15, max=259, avg=18.49, stdev= 5.11 00:31:47.486 clat (usec): min=61, max=1244, avg=319.02, stdev=62.18 00:31:47.486 lat (usec): min=78, max=1261, avg=337.51, stdev=64.56 00:31:47.486 clat percentiles (usec): 00:31:47.486 | 50.000th=[ 314], 99.000th=[ 523], 99.900th=[ 594], 99.990th=[ 1172], 00:31:47.486 | 99.999th=[ 1237] 00:31:47.486 bw ( KiB/s): min=36680, max=54304, per=99.52%, avg=47737.68, stdev=4670.27, samples=19 00:31:47.486 iops : min= 9170, max=13576, avg=11934.42, stdev=1167.57, samples=19 00:31:47.486 lat (usec) : 20=0.01%, 50=0.01%, 100=15.86%, 250=37.64%, 500=45.66% 00:31:47.486 lat (usec) : 750=0.82%, 1000=0.01% 00:31:47.486 lat (msec) : 2=0.01% 00:31:47.486 cpu : usr=99.50%, sys=0.48%, ctx=34, majf=0, minf=8132 00:31:47.486 IO depths : 1=7.4%, 2=19.8%, 4=55.2%, 8=17.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:47.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.486 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:47.486 issued rwts: total=114694,118343,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:31:47.486 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:47.486 00:31:47.486 Run status group 0 (all jobs): 00:31:47.486 READ: bw=3394MiB/s (3559MB/s), 3394MiB/s-3394MiB/s (3559MB/s-3559MB/s), io=448MiB (470MB), run=132-132msec 00:31:47.486 WRITE: bw=46.8MiB/s (49.1MB/s), 46.8MiB/s-46.8MiB/s (49.1MB/s-49.1MB/s), io=462MiB (485MB), run=9869-9869msec 00:31:47.486 ----------------------------------------------------- 00:31:47.486 Suppressions used: 00:31:47.486 count bytes template 00:31:47.486 2 13 /usr/src/fio/parse.c 00:31:47.486 65 5720 /usr/src/fio/iolog.c 00:31:47.486 2 596 libcrypto.so 00:31:47.486 ----------------------------------------------------- 00:31:47.486 00:31:47.486 00:31:47.486 real 0m12.447s 00:31:47.486 user 0m12.810s 00:31:47.486 sys 0m0.666s 00:31:47.487 07:32:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:47.487 ************************************ 00:31:47.487 END TEST bdev_fio_rw_verify 00:31:47.487 ************************************ 00:31:47.487 07:32:21 -- common/autotest_common.sh@10 -- # set +x 00:31:47.487 07:32:21 -- bdev/blockdev.sh@348 -- # rm -f 00:31:47.487 07:32:21 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:47.487 07:32:21 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:31:47.487 07:32:21 -- common/autotest_common.sh@1257 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:47.487 07:32:21 -- common/autotest_common.sh@1258 -- # local workload=trim 00:31:47.487 07:32:21 -- common/autotest_common.sh@1259 -- # local bdev_type= 00:31:47.487 07:32:21 -- common/autotest_common.sh@1260 -- # local env_context= 00:31:47.487 07:32:21 -- common/autotest_common.sh@1261 -- # local fio_dir=/usr/src/fio 00:31:47.487 07:32:21 -- common/autotest_common.sh@1263 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:31:47.487 07:32:21 -- common/autotest_common.sh@1268 -- # '[' -z trim ']' 00:31:47.487 07:32:21 -- common/autotest_common.sh@1272 -- # '[' -n '' ']' 00:31:47.487 07:32:21 -- common/autotest_common.sh@1276 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:47.487 07:32:21 -- common/autotest_common.sh@1278 -- # cat 00:31:47.487 07:32:21 -- common/autotest_common.sh@1290 -- # '[' trim == verify ']' 00:31:47.487 07:32:21 -- common/autotest_common.sh@1305 -- # '[' trim == trim ']' 00:31:47.487 07:32:21 -- common/autotest_common.sh@1306 -- # echo rw=trimwrite 00:31:47.487 07:32:21 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:31:47.487 07:32:21 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "3f811347-e6fe-4b30-a4d2-1860b6d36d6d"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "3f811347-e6fe-4b30-a4d2-1860b6d36d6d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "3f811347-e6fe-4b30-a4d2-1860b6d36d6d",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' 
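The verify pass is a plain bdevperf run, reproduced here from the run_test line above: 128 outstanding I/Os (-q) of 4096 bytes (-o), a verify workload (-w) for 5 seconds (-t) on two cores (-m 0x3), with -C passed through as in the harness:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3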
"num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "5aae921f-f6a9-471a-91da-533e86f90a5b",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9c0e2f4e-b391-4c59-96f9-2a6efcbc1229",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "c0065c12-86c1-48fb-a629-945a95da4ca4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:31:47.487 07:32:21 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:31:47.487 07:32:21 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:31:47.487 /home/vagrant/spdk_repo/spdk 00:31:47.487 07:32:21 -- bdev/blockdev.sh@360 -- # popd 00:31:47.487 07:32:21 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:31:47.487 07:32:21 -- bdev/blockdev.sh@362 -- # return 0 00:31:47.487 00:31:47.487 real 0m12.603s 00:31:47.487 user 0m12.919s 00:31:47.487 sys 0m0.711s 00:31:47.487 ************************************ 00:31:47.487 END TEST bdev_fio 00:31:47.487 ************************************ 00:31:47.487 07:32:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:47.487 07:32:21 -- common/autotest_common.sh@10 -- # set +x 00:31:47.487 07:32:21 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:31:47.487 07:32:21 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:47.487 07:32:21 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:31:47.487 07:32:21 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:47.487 07:32:21 -- common/autotest_common.sh@10 -- # set +x 00:31:47.487 ************************************ 00:31:47.487 START TEST bdev_verify 00:31:47.487 ************************************ 00:31:47.487 07:32:21 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:31:47.746 [2024-02-13 07:32:21.219670] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:31:47.746 [2024-02-13 07:32:21.219899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147823 ] 00:31:47.746 [2024-02-13 07:32:21.377948] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:48.005 [2024-02-13 07:32:21.554608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.005 [2024-02-13 07:32:21.554601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.005 [2024-02-13 07:32:21.554884] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:31:48.573 Running I/O for 5 seconds... 
00:31:53.844 00:31:53.844 Latency(us) 00:31:53.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:53.844 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:31:53.844 Verification LBA range: start 0x0 length 0x2000 00:31:53.844 raid5f : 5.01 11866.35 46.35 0.00 0.00 17088.98 226.21 16086.11 00:31:53.844 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:31:53.844 Verification LBA range: start 0x2000 length 0x2000 00:31:53.844 raid5f : 5.01 11866.05 46.35 0.00 0.00 17090.92 171.29 16205.27 00:31:53.844 =================================================================================================================== 00:31:53.844 Total : 23732.40 92.70 0.00 0.00 17089.95 171.29 16205.27 00:31:53.844 [2024-02-13 07:32:27.120252] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:31:54.782 00:31:54.782 real 0m7.149s 00:31:54.782 user 0m13.118s 00:31:54.782 sys 0m0.329s 00:31:54.782 07:32:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:31:54.782 ************************************ 00:31:54.782 END TEST bdev_verify 00:31:54.782 ************************************ 00:31:54.782 07:32:28 -- common/autotest_common.sh@10 -- # set +x 00:31:54.782 07:32:28 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:54.782 07:32:28 -- common/autotest_common.sh@1075 -- # '[' 16 -le 1 ']' 00:31:54.782 07:32:28 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:31:54.782 07:32:28 -- common/autotest_common.sh@10 -- # set +x 00:31:54.782 ************************************ 00:31:54.782 START TEST bdev_verify_big_io 00:31:54.782 ************************************ 00:31:54.782 07:32:28 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:31:54.782 [2024-02-13 07:32:28.409851] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:31:54.782 [2024-02-13 07:32:28.410001] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147952 ] 00:31:55.041 [2024-02-13 07:32:28.566062] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:55.300 [2024-02-13 07:32:28.751050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:55.300 [2024-02-13 07:32:28.751045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.300 [2024-02-13 07:32:28.751253] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:31:55.559 Running I/O for 5 seconds... 
00:32:00.833 00:32:00.833 Latency(us) 00:32:00.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.834 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:32:00.834 Verification LBA range: start 0x0 length 0x200 00:32:00.834 raid5f : 5.11 845.08 52.82 0.00 0.00 3955922.38 131.26 128688.87 00:32:00.834 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:32:00.834 Verification LBA range: start 0x200 length 0x200 00:32:00.834 raid5f : 5.11 840.90 52.56 0.00 0.00 3973820.40 219.69 128688.87 00:32:00.834 =================================================================================================================== 00:32:00.834 Total : 1685.99 105.37 0.00 0.00 3964844.37 131.26 128688.87 00:32:00.834 [2024-02-13 07:32:34.372800] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:32:02.211 00:32:02.211 real 0m7.207s 00:32:02.211 user 0m13.264s 00:32:02.211 sys 0m0.288s 00:32:02.211 07:32:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:02.211 07:32:35 -- common/autotest_common.sh@10 -- # set +x 00:32:02.211 ************************************ 00:32:02.211 END TEST bdev_verify_big_io 00:32:02.211 ************************************ 00:32:02.211 07:32:35 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:02.211 07:32:35 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:32:02.211 07:32:35 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:02.211 07:32:35 -- common/autotest_common.sh@10 -- # set +x 00:32:02.211 ************************************ 00:32:02.211 START TEST bdev_write_zeroes 00:32:02.211 ************************************ 00:32:02.211 07:32:35 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:02.211 [2024-02-13 07:32:35.674781] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:32:02.211 [2024-02-13 07:32:35.674917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148073 ] 00:32:02.211 [2024-02-13 07:32:35.826059] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:02.470 [2024-02-13 07:32:35.994465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:02.470 [2024-02-13 07:32:35.994584] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:32:03.038 Running I/O for 1 seconds... 
00:32:04.011 00:32:04.011 Latency(us) 00:32:04.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:04.011 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:32:04.011 raid5f : 1.00 28133.46 109.90 0.00 0.00 4534.81 1452.22 5213.09 00:32:04.012 =================================================================================================================== 00:32:04.012 Total : 28133.46 109.90 0.00 0.00 4534.81 1452.22 5213.09 00:32:04.012 [2024-02-13 07:32:37.478900] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:32:05.388 00:32:05.388 real 0m3.051s 00:32:05.388 user 0m2.636s 00:32:05.388 sys 0m0.300s 00:32:05.388 07:32:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:05.388 07:32:38 -- common/autotest_common.sh@10 -- # set +x 00:32:05.388 ************************************ 00:32:05.388 END TEST bdev_write_zeroes 00:32:05.388 ************************************ 00:32:05.388 07:32:38 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:05.388 07:32:38 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:32:05.388 07:32:38 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:05.388 07:32:38 -- common/autotest_common.sh@10 -- # set +x 00:32:05.388 ************************************ 00:32:05.388 START TEST bdev_json_nonenclosed 00:32:05.388 ************************************ 00:32:05.388 07:32:38 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:05.388 [2024-02-13 07:32:38.794996] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:32:05.388 [2024-02-13 07:32:38.796135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148130 ] 00:32:05.388 [2024-02-13 07:32:38.962727] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:05.647 [2024-02-13 07:32:39.148752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:05.647 [2024-02-13 07:32:39.148864] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:32:05.647 [2024-02-13 07:32:39.148993] json_config.c: 598:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:32:05.647 [2024-02-13 07:32:39.149026] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:05.647 [2024-02-13 07:32:39.149125] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:05.647 [2024-02-13 07:32:39.149177] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:32:05.906 00:32:05.906 real 0m0.754s 00:32:05.906 user 0m0.521s 00:32:05.906 sys 0m0.132s 00:32:05.906 07:32:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:05.906 07:32:39 -- common/autotest_common.sh@10 -- # set +x 00:32:05.906 ************************************ 00:32:05.906 END TEST bdev_json_nonenclosed 00:32:05.906 ************************************ 00:32:05.906 07:32:39 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:05.906 07:32:39 -- common/autotest_common.sh@1075 -- # '[' 13 -le 1 ']' 00:32:05.906 07:32:39 -- common/autotest_common.sh@1081 -- # xtrace_disable 00:32:05.906 07:32:39 -- common/autotest_common.sh@10 -- # set +x 00:32:05.906 ************************************ 00:32:05.906 START TEST bdev_json_nonarray 00:32:05.906 ************************************ 00:32:05.906 07:32:39 -- common/autotest_common.sh@1102 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:32:05.906 [2024-02-13 07:32:39.589231] Starting SPDK v24.05-pre git sha1 3bec6cb23 / DPDK 23.11.0 initialization... 00:32:05.906 [2024-02-13 07:32:39.589393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148168 ] 00:32:06.165 [2024-02-13 07:32:39.747070] app.c: 796:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.424 [2024-02-13 07:32:39.915895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.424 [2024-02-13 07:32:39.916013] json_config.c: 649:spdk_subsystem_init_from_json_config: *WARNING*: spdk_subsystem_init_from_json_config: deprecated feature spdk_subsystem_init_from_json_config is deprecated to be removed in v24.09 00:32:06.424 [2024-02-13 07:32:39.916160] json_config.c: 604:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:32:06.424 [2024-02-13 07:32:39.916194] rpc.c: 193:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:32:06.424 [2024-02-13 07:32:39.916236] app.c: 908:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:06.424 [2024-02-13 07:32:39.916280] app.c: 881:log_deprecation_hits: *WARNING*: spdk_subsystem_init_from_json_config: deprecation 'spdk_subsystem_init_from_json_config is deprecated' scheduled for removal in v24.09 hit 1 times 00:32:06.683 00:32:06.683 real 0m0.706s 00:32:06.683 user 0m0.494s 00:32:06.683 sys 0m0.112s 00:32:06.683 07:32:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:06.683 07:32:40 -- common/autotest_common.sh@10 -- # set +x 00:32:06.683 ************************************ 00:32:06.683 END TEST bdev_json_nonarray 00:32:06.683 ************************************ 00:32:06.683 07:32:40 -- bdev/blockdev.sh@785 -- # [[ raid5f == bdev ]] 00:32:06.683 07:32:40 -- bdev/blockdev.sh@792 -- # [[ raid5f == gpt ]] 00:32:06.683 07:32:40 -- bdev/blockdev.sh@796 -- # [[ raid5f == crypto_sw ]] 00:32:06.683 07:32:40 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:32:06.683 07:32:40 -- bdev/blockdev.sh@809 -- # cleanup 00:32:06.683 07:32:40 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:32:06.683 07:32:40 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:32:06.683 07:32:40 -- bdev/blockdev.sh@24 -- # [[ raid5f == rbd ]] 00:32:06.683 07:32:40 -- bdev/blockdev.sh@28 -- # [[ raid5f == daos ]] 00:32:06.683 07:32:40 -- bdev/blockdev.sh@32 -- # [[ raid5f = \g\p\t ]] 00:32:06.683 07:32:40 -- bdev/blockdev.sh@38 -- # [[ raid5f == xnvme ]] 00:32:06.683 00:32:06.683 real 0m46.733s 00:32:06.683 user 1m3.661s 00:32:06.683 sys 0m4.552s 00:32:06.683 07:32:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:32:06.683 ************************************ 00:32:06.683 END TEST blockdev_raid5f 00:32:06.683 07:32:40 -- common/autotest_common.sh@10 -- # set +x 00:32:06.683 ************************************ 00:32:06.683 07:32:40 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:06.683 07:32:40 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:06.683 07:32:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:06.683 07:32:40 -- common/autotest_common.sh@10 -- # set +x 00:32:06.683 07:32:40 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:06.683 07:32:40 -- common/autotest_common.sh@1369 -- # local autotest_es=0 00:32:06.683 07:32:40 -- common/autotest_common.sh@1370 -- # xtrace_disable 00:32:06.683 07:32:40 -- common/autotest_common.sh@10 -- # set +x 00:32:08.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:32:08.060 Waiting for block devices as requested 00:32:08.061 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:32:08.627 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda3, so not binding PCI dev 00:32:08.627 Cleaning 00:32:08.627 Removing: /var/run/dpdk/spdk0/config 00:32:08.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:08.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:08.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:08.627 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:08.627 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:08.627 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:08.627 Removing: /dev/shm/spdk_tgt_trace.pid106307 00:32:08.627 Removing: /var/run/dpdk/spdk0 
00:32:08.627 Removing: /var/run/dpdk/spdk_pid106041 00:32:08.627 Removing: /var/run/dpdk/spdk_pid106307 00:32:08.627 Removing: /var/run/dpdk/spdk_pid106610 00:32:08.627 Removing: /var/run/dpdk/spdk_pid106878 00:32:08.628 Removing: /var/run/dpdk/spdk_pid107071 00:32:08.628 Removing: /var/run/dpdk/spdk_pid107196 00:32:08.628 Removing: /var/run/dpdk/spdk_pid107307 00:32:08.628 Removing: /var/run/dpdk/spdk_pid107447 00:32:08.628 Removing: /var/run/dpdk/spdk_pid107583 00:32:08.628 Removing: /var/run/dpdk/spdk_pid107636 00:32:08.628 Removing: /var/run/dpdk/spdk_pid107686 00:32:08.628 Removing: /var/run/dpdk/spdk_pid107755 00:32:08.628 Removing: /var/run/dpdk/spdk_pid107904 00:32:08.628 Removing: /var/run/dpdk/spdk_pid108491 00:32:08.628 Removing: /var/run/dpdk/spdk_pid108566 00:32:08.628 Removing: /var/run/dpdk/spdk_pid108673 00:32:08.628 Removing: /var/run/dpdk/spdk_pid108701 00:32:08.628 Removing: /var/run/dpdk/spdk_pid108876 00:32:08.628 Removing: /var/run/dpdk/spdk_pid108900 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109075 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109110 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109179 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109235 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109304 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109341 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109554 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109597 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109646 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109730 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109847 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109886 00:32:08.628 Removing: /var/run/dpdk/spdk_pid109981 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110022 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110089 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110130 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110189 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110223 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110299 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110331 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110385 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110431 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110495 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110541 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110588 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110629 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110697 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110736 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110790 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110824 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110889 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110928 00:32:08.628 Removing: /var/run/dpdk/spdk_pid110980 00:32:08.628 Removing: /var/run/dpdk/spdk_pid111014 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111090 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111124 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111178 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111217 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111281 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111322 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111369 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111403 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111462 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111515 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111569 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111606 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111662 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111723 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111773 00:32:08.886 
Removing: /var/run/dpdk/spdk_pid111814 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111861 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111912 00:32:08.886 Removing: /var/run/dpdk/spdk_pid111965 00:32:08.886 Removing: /var/run/dpdk/spdk_pid112049 00:32:08.886 Removing: /var/run/dpdk/spdk_pid112182 00:32:08.886 Removing: /var/run/dpdk/spdk_pid112384 00:32:08.886 Removing: /var/run/dpdk/spdk_pid112467 00:32:08.886 Removing: /var/run/dpdk/spdk_pid112543 00:32:08.886 Removing: /var/run/dpdk/spdk_pid113860 00:32:08.886 Removing: /var/run/dpdk/spdk_pid114109 00:32:08.886 Removing: /var/run/dpdk/spdk_pid114335 00:32:08.886 Removing: /var/run/dpdk/spdk_pid114487 00:32:08.886 Removing: /var/run/dpdk/spdk_pid114647 00:32:08.886 Removing: /var/run/dpdk/spdk_pid114739 00:32:08.886 Removing: /var/run/dpdk/spdk_pid114777 00:32:08.886 Removing: /var/run/dpdk/spdk_pid114815 00:32:08.886 Removing: /var/run/dpdk/spdk_pid115339 00:32:08.886 Removing: /var/run/dpdk/spdk_pid115426 00:32:08.886 Removing: /var/run/dpdk/spdk_pid115563 00:32:08.886 Removing: /var/run/dpdk/spdk_pid115626 00:32:08.886 Removing: /var/run/dpdk/spdk_pid116915 00:32:08.886 Removing: /var/run/dpdk/spdk_pid117870 00:32:08.886 Removing: /var/run/dpdk/spdk_pid118836 00:32:08.886 Removing: /var/run/dpdk/spdk_pid120040 00:32:08.886 Removing: /var/run/dpdk/spdk_pid121187 00:32:08.886 Removing: /var/run/dpdk/spdk_pid122339 00:32:08.886 Removing: /var/run/dpdk/spdk_pid123942 00:32:08.886 Removing: /var/run/dpdk/spdk_pid125242 00:32:08.886 Removing: /var/run/dpdk/spdk_pid126539 00:32:08.886 Removing: /var/run/dpdk/spdk_pid127264 00:32:08.886 Removing: /var/run/dpdk/spdk_pid127873 00:32:08.886 Removing: /var/run/dpdk/spdk_pid128538 00:32:08.886 Removing: /var/run/dpdk/spdk_pid129067 00:32:08.886 Removing: /var/run/dpdk/spdk_pid129676 00:32:08.886 Removing: /var/run/dpdk/spdk_pid130273 00:32:08.886 Removing: /var/run/dpdk/spdk_pid130973 00:32:08.886 Removing: /var/run/dpdk/spdk_pid131559 00:32:08.886 Removing: /var/run/dpdk/spdk_pid133014 00:32:08.886 Removing: /var/run/dpdk/spdk_pid133653 00:32:08.886 Removing: /var/run/dpdk/spdk_pid134240 00:32:08.886 Removing: /var/run/dpdk/spdk_pid135862 00:32:08.886 Removing: /var/run/dpdk/spdk_pid136550 00:32:08.886 Removing: /var/run/dpdk/spdk_pid137202 00:32:08.886 Removing: /var/run/dpdk/spdk_pid138031 00:32:08.886 Removing: /var/run/dpdk/spdk_pid138089 00:32:08.886 Removing: /var/run/dpdk/spdk_pid138145 00:32:08.886 Removing: /var/run/dpdk/spdk_pid138203 00:32:08.886 Removing: /var/run/dpdk/spdk_pid138345 00:32:08.886 Removing: /var/run/dpdk/spdk_pid138499 00:32:08.886 Removing: /var/run/dpdk/spdk_pid138741 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139026 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139050 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139108 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139135 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139180 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139219 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139240 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139272 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139300 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139351 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139379 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139411 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139443 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139471 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139521 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139548 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139574 00:32:08.886 Removing: 
/var/run/dpdk/spdk_pid139601 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139633 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139661 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139734 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139758 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139805 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139881 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139934 00:32:08.886 Removing: /var/run/dpdk/spdk_pid139973 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140017 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140050 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140072 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140129 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140160 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140224 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140257 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140274 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140302 00:32:08.886 Removing: /var/run/dpdk/spdk_pid140327 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140344 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140398 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140423 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140468 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140514 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140542 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140593 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140637 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140653 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140717 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140748 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140784 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140829 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140854 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140878 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140900 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140924 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140948 00:32:09.144 Removing: /var/run/dpdk/spdk_pid140985 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141074 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141178 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141332 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141362 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141446 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141504 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141542 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141575 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141605 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141675 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141704 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141790 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141856 00:32:09.144 Removing: /var/run/dpdk/spdk_pid141907 00:32:09.144 Removing: /var/run/dpdk/spdk_pid142190 00:32:09.144 Removing: /var/run/dpdk/spdk_pid142319 00:32:09.144 Removing: /var/run/dpdk/spdk_pid142366 00:32:09.144 Removing: /var/run/dpdk/spdk_pid142461 00:32:09.144 Removing: /var/run/dpdk/spdk_pid142573 00:32:09.144 Removing: /var/run/dpdk/spdk_pid142618 00:32:09.144 Removing: /var/run/dpdk/spdk_pid142890 00:32:09.144 Removing: /var/run/dpdk/spdk_pid143093 00:32:09.144 Removing: /var/run/dpdk/spdk_pid143218 00:32:09.144 Removing: /var/run/dpdk/spdk_pid143275 00:32:09.144 Removing: /var/run/dpdk/spdk_pid143304 00:32:09.144 Removing: /var/run/dpdk/spdk_pid143380 00:32:09.144 Removing: /var/run/dpdk/spdk_pid143948 00:32:09.144 Removing: /var/run/dpdk/spdk_pid143998 00:32:09.144 Removing: /var/run/dpdk/spdk_pid144343 00:32:09.144 Removing: /var/run/dpdk/spdk_pid144480 00:32:09.144 Removing: 
/var/run/dpdk/spdk_pid144604 00:32:09.144 Removing: /var/run/dpdk/spdk_pid144661 00:32:09.144 Removing: /var/run/dpdk/spdk_pid144699 00:32:09.144 Removing: /var/run/dpdk/spdk_pid144738 00:32:09.144 Removing: /var/run/dpdk/spdk_pid146209 00:32:09.144 Removing: /var/run/dpdk/spdk_pid146371 00:32:09.144 Removing: /var/run/dpdk/spdk_pid146376 00:32:09.144 Removing: /var/run/dpdk/spdk_pid146393 00:32:09.144 Removing: /var/run/dpdk/spdk_pid146911 00:32:09.144 Removing: /var/run/dpdk/spdk_pid147015 00:32:09.144 Removing: /var/run/dpdk/spdk_pid147187 00:32:09.144 Removing: /var/run/dpdk/spdk_pid147259 00:32:09.144 Removing: /var/run/dpdk/spdk_pid147318 00:32:09.144 Removing: /var/run/dpdk/spdk_pid147619 00:32:09.144 Removing: /var/run/dpdk/spdk_pid147823 00:32:09.144 Removing: /var/run/dpdk/spdk_pid147952 00:32:09.144 Removing: /var/run/dpdk/spdk_pid148073 00:32:09.144 Removing: /var/run/dpdk/spdk_pid148130 00:32:09.144 Removing: /var/run/dpdk/spdk_pid148168 00:32:09.144 Clean 00:32:09.402 killing process with pid 96459 00:32:09.402 killing process with pid 96515 00:32:09.402 07:32:42 -- common/autotest_common.sh@1434 -- # return 0 00:32:09.402 07:32:42 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:09.402 07:32:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:09.402 07:32:42 -- common/autotest_common.sh@10 -- # set +x 00:32:09.402 07:32:42 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:09.402 07:32:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:09.402 07:32:42 -- common/autotest_common.sh@10 -- # set +x 00:32:09.402 07:32:42 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:09.402 07:32:42 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:09.402 07:32:42 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:09.402 07:32:42 -- spdk/autotest.sh@394 -- # hash lcov 00:32:09.402 07:32:42 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:09.402 07:32:42 -- spdk/autotest.sh@396 -- # hostname 00:32:09.402 07:32:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t ubuntu2004-cloud-1678329680-1737 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:09.660 geninfo: WARNING: invalid characters removed from testname! 
00:32:21.860 07:32:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:25.146 07:32:58 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:28.441 07:33:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:30.975 07:33:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:34.256 07:33:07 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:36.789 07:33:10 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:32:40.084 07:33:13 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:32:40.085 07:33:13 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:40.085 07:33:13 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:32:40.085 07:33:13 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.085 07:33:13 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:32:40.085 07:33:13 -- common/autobuild_common.sh@435 -- $ date +%s 00:32:40.085 07:33:13 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1707809593.XXXXXX 00:32:40.085 07:33:13 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1707809593.lrhOng 00:32:40.085 07:33:13 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:32:40.085 07:33:13 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:32:40.085 07:33:13 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:32:40.085 07:33:13 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:32:40.085 07:33:13 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o 
/home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:32:40.085 07:33:13 -- common/autobuild_common.sh@451 -- $ get_config_params 00:32:40.085 07:33:13 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:32:40.085 07:33:13 -- common/autotest_common.sh@10 -- $ set +x 00:32:40.085 07:33:13 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:32:40.085 07:33:13 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:32:40.085 07:33:13 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:32:40.085 07:33:13 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:32:40.085 07:33:13 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:32:40.085 07:33:13 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:32:40.085 07:33:13 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:32:40.085 07:33:13 -- common/autotest_common.sh@710 -- $ xtrace_disable 00:32:40.085 07:33:13 -- common/autotest_common.sh@10 -- $ set +x 00:32:40.085 07:33:13 -- spdk/autopackage.sh@25 -- $ get_config_params 00:32:40.085 07:33:13 -- spdk/autopackage.sh@25 -- $ sed s/--enable-debug//g 00:32:40.085 07:33:13 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:32:40.085 07:33:13 -- common/autotest_common.sh@10 -- $ set +x 00:32:40.085 07:33:13 -- spdk/autopackage.sh@25 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f' 00:32:40.085 07:33:13 -- spdk/autopackage.sh@26 -- $ uname -s 00:32:40.085 07:33:13 -- spdk/autopackage.sh@26 -- $ '[' Linux = Linux ']' 00:32:40.085 07:33:13 -- spdk/autopackage.sh@28 -- $ [[ '' == *clang* ]] 00:32:40.085 07:33:13 -- spdk/autopackage.sh@32 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-raid5f --enable-lto 00:32:40.085 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:32:40.085 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:32:40.085 Using 'verbs' RDMA provider 00:32:52.870 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:33:05.097 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:33:05.097 Creating mk/config.mk...done. 00:33:05.097 Creating mk/cc.flags.mk...done. 00:33:05.097 Type 'make' to build. 00:33:05.097 07:33:37 -- spdk/autopackage.sh@37 -- $ make -j10 00:33:05.097 make[1]: Nothing to be done for 'all'. 
00:33:08.391 The Meson build system 00:33:08.391 Version: 1.0.1 00:33:08.391 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:33:08.391 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:33:08.391 Build type: native build 00:33:08.391 Program cat found: YES (/usr/bin/cat) 00:33:08.391 Project name: DPDK 00:33:08.391 Project version: 23.11.0 00:33:08.391 C compiler for the host machine: cc (gcc 9.4.0 "cc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0") 00:33:08.391 C linker for the host machine: cc ld.bfd 2.34 00:33:08.391 Host machine cpu family: x86_64 00:33:08.391 Host machine cpu: x86_64 00:33:08.391 Message: ## Building in Developer Mode ## 00:33:08.391 Program pkg-config found: YES (/usr/bin/pkg-config) 00:33:08.391 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:33:08.391 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:33:08.391 Program python3 found: YES (/usr/bin/python3) 00:33:08.391 Program cat found: YES (/usr/bin/cat) 00:33:08.391 Compiler for C supports arguments -march=native: YES 00:33:08.391 Checking for size of "void *" : 8 00:33:08.391 Checking for size of "void *" : 8 00:33:08.391 Library m found: YES 00:33:08.391 Library numa found: YES 00:33:08.391 Has header "numaif.h" : YES 00:33:08.391 Library fdt found: NO 00:33:08.391 Library execinfo found: NO 00:33:08.391 Has header "execinfo.h" : YES 00:33:08.391 Found pkg-config: /usr/bin/pkg-config (0.29.1) 00:33:08.391 Run-time dependency libarchive found: NO (tried pkgconfig) 00:33:08.391 Run-time dependency libbsd found: NO (tried pkgconfig) 00:33:08.391 Run-time dependency jansson found: NO (tried pkgconfig) 00:33:08.391 Run-time dependency openssl found: YES 1.1.1f 00:33:08.391 Run-time dependency libpcap found: NO (tried pkgconfig) 00:33:08.391 Library pcap found: NO 00:33:08.391 Compiler for C supports arguments -Wcast-qual: YES 00:33:08.391 Compiler for C supports arguments -Wdeprecated: YES 00:33:08.391 Compiler for C supports arguments -Wformat: YES 00:33:08.391 Compiler for C supports arguments -Wformat-nonliteral: YES 00:33:08.391 Compiler for C supports arguments -Wformat-security: YES 00:33:08.391 Compiler for C supports arguments -Wmissing-declarations: YES 00:33:08.391 Compiler for C supports arguments -Wmissing-prototypes: YES 00:33:08.391 Compiler for C supports arguments -Wnested-externs: YES 00:33:08.391 Compiler for C supports arguments -Wold-style-definition: YES 00:33:08.391 Compiler for C supports arguments -Wpointer-arith: YES 00:33:08.391 Compiler for C supports arguments -Wsign-compare: YES 00:33:08.391 Compiler for C supports arguments -Wstrict-prototypes: YES 00:33:08.391 Compiler for C supports arguments -Wundef: YES 00:33:08.391 Compiler for C supports arguments -Wwrite-strings: YES 00:33:08.391 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:33:08.391 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:33:08.391 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:33:08.391 Program objdump found: YES (/usr/bin/objdump) 00:33:08.391 Compiler for C supports arguments -mavx512f: YES 00:33:08.391 Checking if "AVX512 checking" compiles: YES 00:33:08.391 Fetching value of define "__SSE4_2__" : 1 00:33:08.391 Fetching value of define "__AES__" : 1 00:33:08.391 Fetching value of define "__AVX__" : 1 00:33:08.391 Fetching value of define "__AVX2__" : 1 00:33:08.391 Fetching value of define "__AVX512BW__" : 00:33:08.391 
Fetching value of define "__AVX512CD__" : 00:33:08.391 Fetching value of define "__AVX512DQ__" : 00:33:08.391 Fetching value of define "__AVX512F__" : 00:33:08.391 Fetching value of define "__AVX512VL__" : 00:33:08.391 Fetching value of define "__PCLMUL__" : 1 00:33:08.391 Fetching value of define "__RDRND__" : 1 00:33:08.391 Fetching value of define "__RDSEED__" : 1 00:33:08.391 Fetching value of define "__VPCLMULQDQ__" : 00:33:08.391 Fetching value of define "__znver1__" : 00:33:08.391 Fetching value of define "__znver2__" : 00:33:08.391 Fetching value of define "__znver3__" : 00:33:08.391 Fetching value of define "__znver4__" : 00:33:08.391 Compiler for C supports arguments -ffat-lto-objects: YES 00:33:08.391 Library asan found: YES 00:33:08.391 Compiler for C supports arguments -Wno-format-truncation: YES 00:33:08.391 Message: lib/log: Defining dependency "log" 00:33:08.391 Message: lib/kvargs: Defining dependency "kvargs" 00:33:08.391 Message: lib/telemetry: Defining dependency "telemetry" 00:33:08.391 Library rt found: YES 00:33:08.391 Checking for function "getentropy" : NO 00:33:08.391 Message: lib/eal: Defining dependency "eal" 00:33:08.391 Message: lib/ring: Defining dependency "ring" 00:33:08.391 Message: lib/rcu: Defining dependency "rcu" 00:33:08.391 Message: lib/mempool: Defining dependency "mempool" 00:33:08.391 Message: lib/mbuf: Defining dependency "mbuf" 00:33:08.391 Fetching value of define "__PCLMUL__" : 1 (cached) 00:33:08.391 Fetching value of define "__AVX512F__" : (cached) 00:33:08.391 Compiler for C supports arguments -mpclmul: YES 00:33:08.391 Compiler for C supports arguments -maes: YES 00:33:08.391 Compiler for C supports arguments -mavx512f: YES (cached) 00:33:08.391 Compiler for C supports arguments -mavx512bw: YES 00:33:08.391 Compiler for C supports arguments -mavx512dq: YES 00:33:08.391 Compiler for C supports arguments -mavx512vl: YES 00:33:08.391 Compiler for C supports arguments -mvpclmulqdq: YES 00:33:08.391 Compiler for C supports arguments -mavx2: YES 00:33:08.391 Compiler for C supports arguments -mavx: YES 00:33:08.391 Message: lib/net: Defining dependency "net" 00:33:08.391 Message: lib/meter: Defining dependency "meter" 00:33:08.391 Message: lib/ethdev: Defining dependency "ethdev" 00:33:08.391 Message: lib/pci: Defining dependency "pci" 00:33:08.391 Message: lib/cmdline: Defining dependency "cmdline" 00:33:08.391 Message: lib/hash: Defining dependency "hash" 00:33:08.391 Message: lib/timer: Defining dependency "timer" 00:33:08.391 Message: lib/compressdev: Defining dependency "compressdev" 00:33:08.391 Message: lib/cryptodev: Defining dependency "cryptodev" 00:33:08.391 Message: lib/dmadev: Defining dependency "dmadev" 00:33:08.391 Compiler for C supports arguments -Wno-cast-qual: YES 00:33:08.391 Message: lib/power: Defining dependency "power" 00:33:08.391 Message: lib/reorder: Defining dependency "reorder" 00:33:08.391 Message: lib/security: Defining dependency "security" 00:33:08.391 Has header "linux/userfaultfd.h" : YES 00:33:08.391 Has header "linux/vduse.h" : NO 00:33:08.391 Message: lib/vhost: Defining dependency "vhost" 00:33:08.391 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:33:08.391 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:33:08.391 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:33:08.391 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:33:08.391 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:33:08.391 Message: Disabling 
regex/* drivers: missing internal dependency "regexdev" 00:33:08.391 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:33:08.391 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:33:08.391 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:33:08.391 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:33:08.391 Program doxygen found: YES (/usr/bin/doxygen) 00:33:08.391 Configuring doxy-api-html.conf using configuration 00:33:08.391 Configuring doxy-api-man.conf using configuration 00:33:08.391 Program mandb found: YES (/usr/bin/mandb) 00:33:08.391 Program sphinx-build found: NO 00:33:08.391 Configuring rte_build_config.h using configuration 00:33:08.391 Message: 00:33:08.391 ================= 00:33:08.391 Applications Enabled 00:33:08.391 ================= 00:33:08.391 00:33:08.391 apps: 00:33:08.391 00:33:08.391 00:33:08.391 Message: 00:33:08.391 ================= 00:33:08.391 Libraries Enabled 00:33:08.391 ================= 00:33:08.391 00:33:08.391 libs: 00:33:08.391 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:33:08.391 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:33:08.391 cryptodev, dmadev, power, reorder, security, vhost, 00:33:08.391 00:33:08.391 Message: 00:33:08.391 =============== 00:33:08.391 Drivers Enabled 00:33:08.391 =============== 00:33:08.391 00:33:08.391 common: 00:33:08.391 00:33:08.391 bus: 00:33:08.391 pci, vdev, 00:33:08.391 mempool: 00:33:08.391 ring, 00:33:08.391 dma: 00:33:08.391 00:33:08.391 net: 00:33:08.391 00:33:08.391 crypto: 00:33:08.391 00:33:08.391 compress: 00:33:08.391 00:33:08.391 vdpa: 00:33:08.391 00:33:08.391 00:33:08.391 Message: 00:33:08.391 ================= 00:33:08.391 Content Skipped 00:33:08.391 ================= 00:33:08.391 00:33:08.391 apps: 00:33:08.391 dumpcap: explicitly disabled via build config 00:33:08.391 graph: explicitly disabled via build config 00:33:08.391 pdump: explicitly disabled via build config 00:33:08.391 proc-info: explicitly disabled via build config 00:33:08.391 test-acl: explicitly disabled via build config 00:33:08.391 test-bbdev: explicitly disabled via build config 00:33:08.391 test-cmdline: explicitly disabled via build config 00:33:08.391 test-compress-perf: explicitly disabled via build config 00:33:08.391 test-crypto-perf: explicitly disabled via build config 00:33:08.391 test-dma-perf: explicitly disabled via build config 00:33:08.391 test-eventdev: explicitly disabled via build config 00:33:08.391 test-fib: explicitly disabled via build config 00:33:08.391 test-flow-perf: explicitly disabled via build config 00:33:08.392 test-gpudev: explicitly disabled via build config 00:33:08.392 test-mldev: explicitly disabled via build config 00:33:08.392 test-pipeline: explicitly disabled via build config 00:33:08.392 test-pmd: explicitly disabled via build config 00:33:08.392 test-regex: explicitly disabled via build config 00:33:08.392 test-sad: explicitly disabled via build config 00:33:08.392 test-security-perf: explicitly disabled via build config 00:33:08.392 00:33:08.392 libs: 00:33:08.392 metrics: explicitly disabled via build config 00:33:08.392 acl: explicitly disabled via build config 00:33:08.392 bbdev: explicitly disabled via build config 00:33:08.392 bitratestats: explicitly disabled via build config 00:33:08.392 bpf: explicitly disabled via build config 00:33:08.392 cfgfile: explicitly disabled via build config 00:33:08.392 distributor: explicitly disabled via build config 
00:33:08.392 efd: explicitly disabled via build config 00:33:08.392 eventdev: explicitly disabled via build config 00:33:08.392 dispatcher: explicitly disabled via build config 00:33:08.392 gpudev: explicitly disabled via build config 00:33:08.392 gro: explicitly disabled via build config 00:33:08.392 gso: explicitly disabled via build config 00:33:08.392 ip_frag: explicitly disabled via build config 00:33:08.392 jobstats: explicitly disabled via build config 00:33:08.392 latencystats: explicitly disabled via build config 00:33:08.392 lpm: explicitly disabled via build config 00:33:08.392 member: explicitly disabled via build config 00:33:08.392 pcapng: explicitly disabled via build config 00:33:08.392 rawdev: explicitly disabled via build config 00:33:08.392 regexdev: explicitly disabled via build config 00:33:08.392 mldev: explicitly disabled via build config 00:33:08.392 rib: explicitly disabled via build config 00:33:08.392 sched: explicitly disabled via build config 00:33:08.392 stack: explicitly disabled via build config 00:33:08.392 ipsec: explicitly disabled via build config 00:33:08.392 pdcp: explicitly disabled via build config 00:33:08.392 fib: explicitly disabled via build config 00:33:08.392 port: explicitly disabled via build config 00:33:08.392 pdump: explicitly disabled via build config 00:33:08.392 table: explicitly disabled via build config 00:33:08.392 pipeline: explicitly disabled via build config 00:33:08.392 graph: explicitly disabled via build config 00:33:08.392 node: explicitly disabled via build config 00:33:08.392 00:33:08.392 drivers: 00:33:08.392 common/cpt: not in enabled drivers build config 00:33:08.392 common/dpaax: not in enabled drivers build config 00:33:08.392 common/iavf: not in enabled drivers build config 00:33:08.392 common/idpf: not in enabled drivers build config 00:33:08.392 common/mvep: not in enabled drivers build config 00:33:08.392 common/octeontx: not in enabled drivers build config 00:33:08.392 bus/auxiliary: not in enabled drivers build config 00:33:08.392 bus/cdx: not in enabled drivers build config 00:33:08.392 bus/dpaa: not in enabled drivers build config 00:33:08.392 bus/fslmc: not in enabled drivers build config 00:33:08.392 bus/ifpga: not in enabled drivers build config 00:33:08.392 bus/platform: not in enabled drivers build config 00:33:08.392 bus/vmbus: not in enabled drivers build config 00:33:08.392 common/cnxk: not in enabled drivers build config 00:33:08.392 common/mlx5: not in enabled drivers build config 00:33:08.392 common/nfp: not in enabled drivers build config 00:33:08.392 common/qat: not in enabled drivers build config 00:33:08.392 common/sfc_efx: not in enabled drivers build config 00:33:08.392 mempool/bucket: not in enabled drivers build config 00:33:08.392 mempool/cnxk: not in enabled drivers build config 00:33:08.392 mempool/dpaa: not in enabled drivers build config 00:33:08.392 mempool/dpaa2: not in enabled drivers build config 00:33:08.392 mempool/octeontx: not in enabled drivers build config 00:33:08.392 mempool/stack: not in enabled drivers build config 00:33:08.392 dma/cnxk: not in enabled drivers build config 00:33:08.392 dma/dpaa: not in enabled drivers build config 00:33:08.392 dma/dpaa2: not in enabled drivers build config 00:33:08.392 dma/hisilicon: not in enabled drivers build config 00:33:08.392 dma/idxd: not in enabled drivers build config 00:33:08.392 dma/ioat: not in enabled drivers build config 00:33:08.392 dma/skeleton: not in enabled drivers build config 00:33:08.392 net/af_packet: not in enabled 
drivers build config 00:33:08.392 net/af_xdp: not in enabled drivers build config 00:33:08.392 net/ark: not in enabled drivers build config 00:33:08.392 net/atlantic: not in enabled drivers build config 00:33:08.392 net/avp: not in enabled drivers build config 00:33:08.392 net/axgbe: not in enabled drivers build config 00:33:08.392 net/bnx2x: not in enabled drivers build config 00:33:08.392 net/bnxt: not in enabled drivers build config 00:33:08.392 net/bonding: not in enabled drivers build config 00:33:08.392 net/cnxk: not in enabled drivers build config 00:33:08.392 net/cpfl: not in enabled drivers build config 00:33:08.392 net/cxgbe: not in enabled drivers build config 00:33:08.392 net/dpaa: not in enabled drivers build config 00:33:08.392 net/dpaa2: not in enabled drivers build config 00:33:08.392 net/e1000: not in enabled drivers build config 00:33:08.392 net/ena: not in enabled drivers build config 00:33:08.392 net/enetc: not in enabled drivers build config 00:33:08.392 net/enetfec: not in enabled drivers build config 00:33:08.392 net/enic: not in enabled drivers build config 00:33:08.392 net/failsafe: not in enabled drivers build config 00:33:08.392 net/fm10k: not in enabled drivers build config 00:33:08.392 net/gve: not in enabled drivers build config 00:33:08.392 net/hinic: not in enabled drivers build config 00:33:08.392 net/hns3: not in enabled drivers build config 00:33:08.392 net/i40e: not in enabled drivers build config 00:33:08.392 net/iavf: not in enabled drivers build config 00:33:08.392 net/ice: not in enabled drivers build config 00:33:08.392 net/idpf: not in enabled drivers build config 00:33:08.392 net/igc: not in enabled drivers build config 00:33:08.392 net/ionic: not in enabled drivers build config 00:33:08.392 net/ipn3ke: not in enabled drivers build config 00:33:08.392 net/ixgbe: not in enabled drivers build config 00:33:08.392 net/mana: not in enabled drivers build config 00:33:08.392 net/memif: not in enabled drivers build config 00:33:08.392 net/mlx4: not in enabled drivers build config 00:33:08.392 net/mlx5: not in enabled drivers build config 00:33:08.392 net/mvneta: not in enabled drivers build config 00:33:08.392 net/mvpp2: not in enabled drivers build config 00:33:08.392 net/netvsc: not in enabled drivers build config 00:33:08.392 net/nfb: not in enabled drivers build config 00:33:08.392 net/nfp: not in enabled drivers build config 00:33:08.392 net/ngbe: not in enabled drivers build config 00:33:08.392 net/null: not in enabled drivers build config 00:33:08.392 net/octeontx: not in enabled drivers build config 00:33:08.392 net/octeon_ep: not in enabled drivers build config 00:33:08.392 net/pcap: not in enabled drivers build config 00:33:08.392 net/pfe: not in enabled drivers build config 00:33:08.392 net/qede: not in enabled drivers build config 00:33:08.392 net/ring: not in enabled drivers build config 00:33:08.392 net/sfc: not in enabled drivers build config 00:33:08.392 net/softnic: not in enabled drivers build config 00:33:08.392 net/tap: not in enabled drivers build config 00:33:08.392 net/thunderx: not in enabled drivers build config 00:33:08.392 net/txgbe: not in enabled drivers build config 00:33:08.392 net/vdev_netvsc: not in enabled drivers build config 00:33:08.392 net/vhost: not in enabled drivers build config 00:33:08.392 net/virtio: not in enabled drivers build config 00:33:08.392 net/vmxnet3: not in enabled drivers build config 00:33:08.392 raw/*: missing internal dependency, "rawdev" 00:33:08.392 crypto/armv8: not in enabled drivers build 
config 00:33:08.392 crypto/bcmfs: not in enabled drivers build config 00:33:08.392 crypto/caam_jr: not in enabled drivers build config 00:33:08.392 crypto/ccp: not in enabled drivers build config 00:33:08.392 crypto/cnxk: not in enabled drivers build config 00:33:08.392 crypto/dpaa_sec: not in enabled drivers build config 00:33:08.392 crypto/dpaa2_sec: not in enabled drivers build config 00:33:08.392 crypto/ipsec_mb: not in enabled drivers build config 00:33:08.392 crypto/mlx5: not in enabled drivers build config 00:33:08.392 crypto/mvsam: not in enabled drivers build config 00:33:08.392 crypto/nitrox: not in enabled drivers build config 00:33:08.392 crypto/null: not in enabled drivers build config 00:33:08.392 crypto/octeontx: not in enabled drivers build config 00:33:08.392 crypto/openssl: not in enabled drivers build config 00:33:08.392 crypto/scheduler: not in enabled drivers build config 00:33:08.392 crypto/uadk: not in enabled drivers build config 00:33:08.392 crypto/virtio: not in enabled drivers build config 00:33:08.392 compress/isal: not in enabled drivers build config 00:33:08.392 compress/mlx5: not in enabled drivers build config 00:33:08.392 compress/octeontx: not in enabled drivers build config 00:33:08.392 compress/zlib: not in enabled drivers build config 00:33:08.392 regex/*: missing internal dependency, "regexdev" 00:33:08.392 ml/*: missing internal dependency, "mldev" 00:33:08.392 vdpa/ifc: not in enabled drivers build config 00:33:08.392 vdpa/mlx5: not in enabled drivers build config 00:33:08.392 vdpa/nfp: not in enabled drivers build config 00:33:08.392 vdpa/sfc: not in enabled drivers build config 00:33:08.392 event/*: missing internal dependency, "eventdev" 00:33:08.392 baseband/*: missing internal dependency, "bbdev" 00:33:08.392 gpu/*: missing internal dependency, "gpudev" 00:33:08.392 00:33:08.392 00:33:08.960 Build targets in project: 85 00:33:08.960 00:33:08.960 DPDK 23.11.0 00:33:08.960 00:33:08.960 User defined options 00:33:08.960 default_library : static 00:33:08.960 libdir : lib 00:33:08.960 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:33:08.960 b_lto : true 00:33:08.960 b_sanitize : address 00:33:08.960 c_args : -fPIC -Werror 00:33:08.960 c_link_args : 00:33:08.960 cpu_instruction_set: native 00:33:08.960 disable_apps : test-bbdev,test,pdump,test-sad,test-fib,test-dma-perf,test-acl,test-pipeline,test-eventdev,test-regex,test-mldev,test-security-perf,graph,proc-info,test-cmdline,test-crypto-perf,test-flow-perf,test-gpudev,test-pmd,dumpcap,test-compress-perf 00:33:08.960 disable_libs : gso,eventdev,ipsec,lpm,ip_frag,pdump,latencystats,pcapng,efd,gpudev,fib,rawdev,member,node,stack,bitratestats,pipeline,graph,mldev,gro,bbdev,cfgfile,metrics,rib,port,regexdev,table,bpf,pdcp,distributor,acl,sched,jobstats,dispatcher 00:33:08.960 enable_docs : false 00:33:08.960 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:33:08.960 enable_kmods : false 00:33:08.960 tests : false 00:33:08.960 00:33:08.960 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:33:09.529 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:33:09.529 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:33:09.529 [2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:33:09.529 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:33:09.529 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:33:09.529 [5/264] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:33:09.529 [6/264] Linking static target lib/librte_kvargs.a 00:33:09.529 [7/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:33:09.787 [8/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:33:09.787 [9/264] Linking static target lib/librte_log.a 00:33:09.788 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:33:09.788 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:33:09.788 [12/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:33:09.788 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:33:09.788 [14/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:33:10.047 [15/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:33:10.047 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:33:10.047 [17/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:33:10.047 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:33:10.306 [19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:33:10.306 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:33:10.306 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:33:10.306 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:33:10.306 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:33:10.565 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:33:10.565 [25/264] Linking target lib/librte_log.so.24.0 00:33:10.565 [26/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:33:10.565 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:33:10.565 [28/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:33:10.824 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:33:10.824 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:33:10.824 [31/264] Linking target lib/librte_kvargs.so.24.0 00:33:10.824 [32/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:33:10.824 [33/264] Linking static target lib/librte_telemetry.a 00:33:10.824 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:33:10.824 [35/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:33:10.824 [36/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:33:10.824 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:33:10.824 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:33:11.083 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:33:11.083 [40/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:33:11.083 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:33:11.083 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:33:11.083 [43/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:33:11.342 [44/264] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:33:11.342 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:33:11.601 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:33:11.601 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:33:11.601 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:33:11.601 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:33:11.601 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:33:11.601 [51/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:33:11.860 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:33:11.860 [53/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:33:11.860 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:33:11.860 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:33:11.860 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:33:11.860 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:33:11.860 [58/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:33:11.860 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:33:12.119 [60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:33:12.119 [61/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:33:12.119 [62/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:33:12.119 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:33:12.119 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:33:12.119 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:33:12.378 [66/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:33:12.378 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:33:12.378 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:33:12.378 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:33:12.637 [70/264] Linking target lib/librte_telemetry.so.24.0 00:33:12.637 [71/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:33:12.637 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:33:12.637 [73/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:33:12.637 [74/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:33:12.637 [75/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:33:12.637 [76/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:33:12.637 [77/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:33:12.637 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:33:12.896 [79/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:33:12.896 [80/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:33:13.155 [81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:33:13.155 [82/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:33:13.155 [83/264] Linking static target lib/librte_ring.a 00:33:13.155 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:33:13.155 [85/264] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:33:13.155 [86/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:33:13.155 [87/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:33:13.155 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:33:13.414 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:33:13.672 [90/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:33:13.672 [91/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:33:13.672 [92/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:33:13.672 [93/264] Linking static target lib/librte_eal.a 00:33:13.672 [94/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:33:13.672 [95/264] Linking static target lib/librte_mempool.a 00:33:13.672 [96/264] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:33:13.672 [97/264] Linking static target lib/net/libnet_crc_avx512_lib.a 00:33:13.672 [98/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:33:13.931 [99/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:33:13.931 [100/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:33:13.931 [101/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:33:13.931 [102/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:33:13.931 [103/264] Linking static target lib/librte_rcu.a 00:33:14.189 [104/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:33:14.189 [105/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:33:14.189 [106/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:33:14.189 [107/264] Linking static target lib/librte_meter.a 00:33:14.189 [108/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:33:14.189 [109/264] Linking static target lib/librte_net.a 00:33:14.189 [110/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:33:14.449 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:33:14.449 [112/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:33:14.449 [113/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:33:14.449 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:33:14.449 [115/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:33:14.708 [116/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:33:14.966 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:33:15.225 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:33:15.225 [119/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:33:15.225 [120/264] Linking static target lib/librte_mbuf.a 00:33:15.484 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:33:15.484 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:33:15.484 [123/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:33:15.743 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:33:15.743 [125/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:33:15.743 [126/264] Linking static target lib/librte_pci.a 00:33:15.743 
[127/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:33:15.743 [128/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:33:15.743 [129/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:33:15.743 [130/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:33:15.743 [131/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:33:16.005 [132/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:33:16.005 [133/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:33:16.005 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:33:16.005 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:33:16.005 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:33:16.005 [137/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:33:16.005 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:33:16.315 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:33:16.315 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:33:16.315 [141/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:33:16.315 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:33:16.315 [143/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:33:16.584 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:33:16.584 [145/264] Linking static target lib/librte_cmdline.a 00:33:16.584 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:33:16.584 [147/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:33:16.844 [148/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:33:16.844 [149/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:33:16.844 [150/264] Linking static target lib/librte_timer.a 00:33:17.103 [151/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:33:17.103 [152/264] Linking static target lib/librte_compressdev.a 00:33:17.103 [153/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:33:17.103 [154/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:33:17.103 [155/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:33:17.103 [156/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:33:17.362 [157/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:33:17.362 [158/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:33:17.362 [159/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:33:17.362 [160/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:33:17.621 [161/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:33:17.621 [162/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:33:17.621 [163/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:33:18.189 [164/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:33:18.189 [165/264] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:33:18.189 [166/264] Linking static target lib/librte_dmadev.a 00:33:18.189 [167/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:33:18.189 [168/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:33:18.189 [169/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:33:18.448 [170/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:33:18.448 [171/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:33:18.448 [172/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:33:18.448 [173/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:33:18.707 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:33:18.966 [175/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:33:18.966 [176/264] Linking static target lib/librte_power.a 00:33:18.966 [177/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:33:18.966 [178/264] Linking static target lib/librte_reorder.a 00:33:18.966 [179/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:33:18.966 [180/264] Linking static target lib/librte_security.a 00:33:19.225 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:33:19.225 [182/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:33:19.225 [183/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:33:19.225 [184/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:33:19.483 [185/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:33:19.483 [186/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:33:19.483 [187/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:33:20.052 [188/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:33:20.052 [189/264] Linking static target lib/librte_cryptodev.a 00:33:20.052 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:33:20.052 [191/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:33:20.312 [192/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:33:20.312 [193/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:33:20.312 [194/264] Linking static target lib/librte_ethdev.a 00:33:20.312 [195/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:33:20.312 [196/264] Linking static target lib/librte_hash.a 00:33:20.571 [197/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:33:20.830 [198/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:33:20.830 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:33:21.089 [200/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:33:21.089 [201/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:33:21.089 [202/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:33:21.089 [203/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:33:21.349 [204/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:33:21.349 
[205/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:33:21.608 [206/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:33:21.608 [207/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:33:21.608 [208/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:33:21.608 [209/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:33:21.608 [210/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:33:21.608 [211/264] Linking static target drivers/librte_bus_vdev.a 00:33:21.867 [212/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:33:21.867 [213/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:33:21.867 [214/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:33:21.867 [215/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:33:21.867 [216/264] Linking static target drivers/librte_bus_pci.a 00:33:21.867 [217/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:33:22.125 [218/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:33:22.125 [219/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:33:22.125 [220/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:33:22.125 [221/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:33:22.384 [222/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:33:22.384 [223/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:33:22.384 [224/264] Linking static target drivers/librte_mempool_ring.a 00:33:24.919 [225/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:33:30.192 [226/264] Linking target lib/librte_eal.so.24.0 00:33:30.192 [227/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:33:30.192 [228/264] Linking target lib/librte_ring.so.24.0 00:33:30.192 [229/264] Linking target lib/librte_meter.so.24.0 00:33:30.192 [230/264] Linking target lib/librte_pci.so.24.0 00:33:30.192 [231/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:33:30.192 [232/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:33:30.192 [233/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:33:30.192 [234/264] Linking target drivers/librte_bus_vdev.so.24.0 00:33:30.471 [235/264] Linking target lib/librte_timer.so.24.0 00:33:30.471 [236/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:33:30.740 [237/264] Linking target lib/librte_dmadev.so.24.0 00:33:30.740 [238/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:33:30.999 [239/264] Linking target lib/librte_rcu.so.24.0 00:33:30.999 [240/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:33:30.999 [241/264] Linking target lib/librte_mempool.so.24.0 00:33:31.258 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:33:31.517 [243/264] Linking target drivers/librte_bus_pci.so.24.0 00:33:31.776 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 
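The numbered [N/264] lines above are ninja compiling and linking DPDK according to the "User defined options" summary printed earlier (static libraries, LTO, AddressSanitizer, and a trimmed app/lib/driver set). For readers who want to reproduce that configuration outside the CI wrapper, a minimal shell sketch of an equivalent meson invocation follows; every option value is copied from that summary and the build directory name comes from the ninja banner, but the wrapper script that actually ran here is not shown in the log, so treat this as a reconstruction rather than the exact command:

    # Reconstructed from the "User defined options" block above.
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        -Ddefault_library=static \
        -Db_lto=true \
        -Db_sanitize=address \
        -Dc_args='-fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring
    # (the long disable_apps/disable_libs lists from the summary are omitted here)
    ninja -C build-tmp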
00:33:32.713 [245/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:33:32.972 [246/264] Linking target lib/librte_mbuf.so.24.0 00:33:33.231 [247/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:33:33.798 [248/264] Linking target lib/librte_reorder.so.24.0 00:33:33.798 [249/264] Linking target lib/librte_compressdev.so.24.0 00:33:34.364 [250/264] Linking target lib/librte_net.so.24.0 00:33:34.364 [251/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:33:35.740 [252/264] Linking target lib/librte_cmdline.so.24.0 00:33:35.740 [253/264] Linking target lib/librte_cryptodev.so.24.0 00:33:35.740 [254/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:33:35.999 [255/264] Linking target lib/librte_security.so.24.0 00:33:38.534 [256/264] Linking target lib/librte_hash.so.24.0 00:33:38.534 [257/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:33:46.653 [258/264] Linking target lib/librte_ethdev.so.24.0 00:33:46.653 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:33:48.032 [260/264] Linking target lib/librte_power.so.24.0 00:33:54.596 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:33:54.596 [262/264] Linking static target lib/librte_vhost.a 00:33:55.163 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:34:41.888 [264/264] Linking target lib/librte_vhost.so.24.0 00:34:41.888 INFO: autodetecting backend as ninja 00:34:41.888 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:34:41.888 CC lib/ut_mock/mock.o 00:34:41.888 CC lib/ut/ut.o 00:34:41.888 CC lib/log/log.o 00:34:41.888 CC lib/log/log_flags.o 00:34:41.888 CC lib/log/log_deprecated.o 00:34:41.888 LIB libspdk_ut_mock.a 00:34:41.888 LIB libspdk_log.a 00:34:41.888 LIB libspdk_ut.a 00:34:41.888 CC lib/dma/dma.o 00:34:41.888 CC lib/util/base64.o 00:34:41.888 CC lib/util/bit_array.o 00:34:41.888 CXX lib/trace_parser/trace.o 00:34:41.888 CC lib/util/cpuset.o 00:34:41.888 CC lib/ioat/ioat.o 00:34:41.888 CC lib/util/crc32.o 00:34:41.888 CC lib/util/crc16.o 00:34:41.888 CC lib/util/crc32c.o 00:34:41.888 CC lib/vfio_user/host/vfio_user_pci.o 00:34:41.888 CC lib/vfio_user/host/vfio_user.o 00:34:41.888 CC lib/util/crc32_ieee.o 00:34:41.888 CC lib/util/crc64.o 00:34:41.888 CC lib/util/dif.o 00:34:41.888 LIB libspdk_dma.a 00:34:41.888 CC lib/util/fd.o 00:34:41.888 CC lib/util/file.o 00:34:41.888 CC lib/util/hexlify.o 00:34:41.888 LIB libspdk_ioat.a 00:34:41.888 CC lib/util/iov.o 00:34:41.888 CC lib/util/math.o 00:34:41.888 CC lib/util/pipe.o 00:34:41.888 CC lib/util/strerror_tls.o 00:34:41.888 CC lib/util/string.o 00:34:41.888 LIB libspdk_vfio_user.a 00:34:41.888 CC lib/util/uuid.o 00:34:41.888 CC lib/util/fd_group.o 00:34:41.888 CC lib/util/xor.o 00:34:41.888 CC lib/util/zipf.o 00:34:41.888 LIB libspdk_util.a 00:34:41.888 CC lib/vmd/vmd.o 00:34:41.888 CC lib/rdma/common.o 00:34:41.888 CC lib/vmd/led.o 00:34:41.888 CC lib/rdma/rdma_verbs.o 00:34:41.888 CC lib/env_dpdk/env.o 00:34:41.888 CC lib/json/json_parse.o 00:34:41.888 CC lib/json/json_util.o 00:34:41.888 CC lib/idxd/idxd.o 00:34:41.888 CC lib/conf/conf.o 00:34:42.147 LIB libspdk_trace_parser.a 00:34:42.147 CC lib/idxd/idxd_user.o 00:34:42.147 CC lib/env_dpdk/memory.o 00:34:42.147 CC lib/env_dpdk/pci.o 00:34:42.147 CC lib/json/json_write.o 00:34:42.147 
LIB libspdk_conf.a 00:34:42.147 CC lib/env_dpdk/init.o 00:34:42.147 CC lib/env_dpdk/threads.o 00:34:42.147 LIB libspdk_rdma.a 00:34:42.147 CC lib/env_dpdk/pci_ioat.o 00:34:42.147 CC lib/env_dpdk/pci_virtio.o 00:34:42.405 CC lib/env_dpdk/pci_vmd.o 00:34:42.405 CC lib/env_dpdk/pci_idxd.o 00:34:42.405 CC lib/env_dpdk/pci_event.o 00:34:42.405 LIB libspdk_idxd.a 00:34:42.405 LIB libspdk_json.a 00:34:42.405 CC lib/env_dpdk/sigbus_handler.o 00:34:42.405 CC lib/env_dpdk/pci_dpdk.o 00:34:42.406 LIB libspdk_vmd.a 00:34:42.406 CC lib/env_dpdk/pci_dpdk_2207.o 00:34:42.406 CC lib/env_dpdk/pci_dpdk_2211.o 00:34:42.406 CC lib/jsonrpc/jsonrpc_server.o 00:34:42.406 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:34:42.406 CC lib/jsonrpc/jsonrpc_client.o 00:34:42.406 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:34:42.664 LIB libspdk_jsonrpc.a 00:34:42.664 CC lib/rpc/rpc.o 00:34:42.923 LIB libspdk_rpc.a 00:34:42.923 LIB libspdk_env_dpdk.a 00:34:42.923 CC lib/notify/notify_rpc.o 00:34:42.923 CC lib/notify/notify.o 00:34:42.923 CC lib/sock/sock.o 00:34:42.923 CC lib/sock/sock_rpc.o 00:34:42.923 CC lib/trace/trace_flags.o 00:34:42.923 CC lib/trace/trace.o 00:34:42.923 CC lib/trace/trace_rpc.o 00:34:43.182 LIB libspdk_notify.a 00:34:43.182 LIB libspdk_trace.a 00:34:43.182 LIB libspdk_sock.a 00:34:43.182 CC lib/thread/iobuf.o 00:34:43.182 CC lib/thread/thread.o 00:34:43.182 CC lib/nvme/nvme_ctrlr_cmd.o 00:34:43.182 CC lib/nvme/nvme_ctrlr.o 00:34:43.182 CC lib/nvme/nvme_fabric.o 00:34:43.182 CC lib/nvme/nvme_ns_cmd.o 00:34:43.182 CC lib/nvme/nvme_ns.o 00:34:43.182 CC lib/nvme/nvme_pcie_common.o 00:34:43.182 CC lib/nvme/nvme_pcie.o 00:34:43.182 CC lib/nvme/nvme_qpair.o 00:34:43.441 CC lib/nvme/nvme.o 00:34:43.700 CC lib/nvme/nvme_quirks.o 00:34:43.700 CC lib/nvme/nvme_transport.o 00:34:43.700 CC lib/nvme/nvme_discovery.o 00:34:43.959 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:34:43.959 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:34:43.959 CC lib/nvme/nvme_tcp.o 00:34:43.959 CC lib/nvme/nvme_opal.o 00:34:43.959 CC lib/nvme/nvme_io_msg.o 00:34:43.959 LIB libspdk_thread.a 00:34:43.959 CC lib/accel/accel.o 00:34:44.218 CC lib/nvme/nvme_poll_group.o 00:34:44.218 CC lib/nvme/nvme_zns.o 00:34:44.218 CC lib/nvme/nvme_cuse.o 00:34:44.218 CC lib/nvme/nvme_vfio_user.o 00:34:44.218 CC lib/nvme/nvme_rdma.o 00:34:44.218 CC lib/accel/accel_rpc.o 00:34:44.476 CC lib/accel/accel_sw.o 00:34:44.476 CC lib/blob/blobstore.o 00:34:44.476 CC lib/blob/request.o 00:34:44.735 CC lib/virtio/virtio.o 00:34:44.735 CC lib/init/json_config.o 00:34:44.735 CC lib/init/subsystem.o 00:34:44.735 CC lib/init/subsystem_rpc.o 00:34:44.735 LIB libspdk_accel.a 00:34:44.735 CC lib/init/rpc.o 00:34:44.735 CC lib/blob/zeroes.o 00:34:44.735 CC lib/blob/blob_bs_dev.o 00:34:44.735 CC lib/virtio/virtio_vhost_user.o 00:34:44.735 CC lib/virtio/virtio_vfio_user.o 00:34:44.735 CC lib/virtio/virtio_pci.o 00:34:44.735 CC lib/bdev/bdev.o 00:34:44.735 CC lib/bdev/bdev_rpc.o 00:34:44.996 LIB libspdk_init.a 00:34:44.996 CC lib/bdev/bdev_zone.o 00:34:44.996 CC lib/bdev/part.o 00:34:44.997 CC lib/bdev/scsi_nvme.o 00:34:44.997 CC lib/event/app.o 00:34:44.997 CC lib/event/reactor.o 00:34:44.997 LIB libspdk_virtio.a 00:34:44.997 CC lib/event/log_rpc.o 00:34:44.997 CC lib/event/app_rpc.o 00:34:44.997 CC lib/event/scheduler_static.o 00:34:44.997 LIB libspdk_nvme.a 00:34:45.258 LIB libspdk_event.a 00:34:46.219 LIB libspdk_blob.a 00:34:46.219 CC lib/blobfs/blobfs.o 00:34:46.219 CC lib/blobfs/tree.o 00:34:46.219 CC lib/lvol/lvol.o 00:34:46.219 LIB libspdk_bdev.a 00:34:46.478 CC lib/scsi/lun.o 00:34:46.478 CC 
lib/scsi/port.o 00:34:46.478 CC lib/scsi/dev.o 00:34:46.478 CC lib/scsi/scsi.o 00:34:46.478 CC lib/scsi/scsi_bdev.o 00:34:46.478 CC lib/nbd/nbd.o 00:34:46.478 CC lib/ftl/ftl_core.o 00:34:46.478 CC lib/nvmf/ctrlr.o 00:34:46.478 LIB libspdk_blobfs.a 00:34:46.478 CC lib/scsi/scsi_pr.o 00:34:46.478 CC lib/scsi/scsi_rpc.o 00:34:46.478 CC lib/ftl/ftl_init.o 00:34:46.737 LIB libspdk_lvol.a 00:34:46.737 CC lib/ftl/ftl_layout.o 00:34:46.737 CC lib/ftl/ftl_debug.o 00:34:46.737 CC lib/ftl/ftl_io.o 00:34:46.737 CC lib/ftl/ftl_sb.o 00:34:46.737 CC lib/nbd/nbd_rpc.o 00:34:46.737 CC lib/nvmf/ctrlr_discovery.o 00:34:46.737 CC lib/scsi/task.o 00:34:46.737 CC lib/nvmf/ctrlr_bdev.o 00:34:46.737 CC lib/nvmf/subsystem.o 00:34:46.737 CC lib/ftl/ftl_l2p.o 00:34:46.737 CC lib/nvmf/nvmf.o 00:34:46.737 LIB libspdk_scsi.a 00:34:46.737 CC lib/ftl/ftl_l2p_flat.o 00:34:46.737 LIB libspdk_nbd.a 00:34:46.996 CC lib/ftl/ftl_nv_cache.o 00:34:46.996 CC lib/ftl/ftl_band.o 00:34:46.996 CC lib/ftl/ftl_band_ops.o 00:34:46.996 CC lib/iscsi/conn.o 00:34:46.996 CC lib/nvmf/nvmf_rpc.o 00:34:46.996 CC lib/nvmf/transport.o 00:34:46.996 CC lib/nvmf/tcp.o 00:34:46.996 CC lib/nvmf/rdma.o 00:34:47.254 CC lib/vhost/vhost.o 00:34:47.254 CC lib/vhost/vhost_rpc.o 00:34:47.254 CC lib/vhost/vhost_scsi.o 00:34:47.254 CC lib/vhost/vhost_blk.o 00:34:47.254 CC lib/iscsi/init_grp.o 00:34:47.254 CC lib/iscsi/iscsi.o 00:34:47.512 CC lib/iscsi/md5.o 00:34:47.512 CC lib/ftl/ftl_writer.o 00:34:47.512 CC lib/vhost/rte_vhost_user.o 00:34:47.512 CC lib/iscsi/param.o 00:34:47.513 CC lib/ftl/ftl_rq.o 00:34:47.770 CC lib/ftl/ftl_reloc.o 00:34:47.770 CC lib/ftl/ftl_l2p_cache.o 00:34:47.770 CC lib/iscsi/portal_grp.o 00:34:47.770 CC lib/iscsi/tgt_node.o 00:34:47.770 CC lib/iscsi/iscsi_subsystem.o 00:34:47.770 CC lib/ftl/ftl_p2l.o 00:34:47.770 CC lib/ftl/ftl_trace.o 00:34:47.770 CC lib/iscsi/iscsi_rpc.o 00:34:48.028 CC lib/iscsi/task.o 00:34:48.028 CC lib/ftl/mngt/ftl_mngt.o 00:34:48.028 LIB libspdk_nvmf.a 00:34:48.028 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:34:48.028 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:34:48.028 CC lib/ftl/mngt/ftl_mngt_startup.o 00:34:48.028 CC lib/ftl/mngt/ftl_mngt_md.o 00:34:48.028 CC lib/ftl/mngt/ftl_mngt_misc.o 00:34:48.028 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:34:48.028 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:34:48.028 LIB libspdk_iscsi.a 00:34:48.287 CC lib/ftl/mngt/ftl_mngt_band.o 00:34:48.287 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:34:48.287 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:34:48.287 LIB libspdk_vhost.a 00:34:48.287 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:34:48.287 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:34:48.287 CC lib/ftl/utils/ftl_conf.o 00:34:48.287 CC lib/ftl/utils/ftl_md.o 00:34:48.287 CC lib/ftl/utils/ftl_mempool.o 00:34:48.287 CC lib/ftl/utils/ftl_bitmap.o 00:34:48.287 CC lib/ftl/utils/ftl_property.o 00:34:48.287 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:34:48.287 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:34:48.287 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:34:48.287 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:34:48.287 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:34:48.287 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:34:48.287 CC lib/ftl/upgrade/ftl_sb_v3.o 00:34:48.545 CC lib/ftl/upgrade/ftl_sb_v5.o 00:34:48.545 CC lib/ftl/nvc/ftl_nvc_dev.o 00:34:48.545 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:34:48.545 CC lib/ftl/base/ftl_base_dev.o 00:34:48.545 CC lib/ftl/base/ftl_base_bdev.o 00:34:48.804 LIB libspdk_ftl.a 00:34:48.804 CC module/env_dpdk/env_dpdk_rpc.o 00:34:49.063 CC module/accel/ioat/accel_ioat.o 00:34:49.063 CC module/sock/posix/posix.o 
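The CC/CXX/LIB lines above come from SPDK's quiet make output: each lib/<name> directory is compiled and archived into a static libspdk_<name>.a, and later objects link against the libraries built before them. A minimal sketch of driving this stage by hand, assuming an SPDK checkout whose ./configure accepts --with-dpdk and --enable-asan (the flag names are assumptions; only the DPDK-side AddressSanitizer setting and the -j10 parallelism are actually visible in this log):

    # Sketch, not the CI's exact commands: point SPDK at the DPDK build
    # from the previous stage (the prefix seen above), then build in parallel.
    ./configure --with-dpdk=dpdk/build --enable-asan
    make -j10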
00:34:49.063 CC module/scheduler/dynamic/scheduler_dynamic.o 00:34:49.063 CC module/accel/dsa/accel_dsa.o 00:34:49.063 CC module/accel/iaa/accel_iaa.o 00:34:49.063 CC module/scheduler/gscheduler/gscheduler.o 00:34:49.063 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:34:49.063 CC module/accel/error/accel_error.o 00:34:49.063 CC module/blob/bdev/blob_bdev.o 00:34:49.063 LIB libspdk_env_dpdk_rpc.a 00:34:49.063 CC module/accel/error/accel_error_rpc.o 00:34:49.063 LIB libspdk_scheduler_dpdk_governor.a 00:34:49.063 CC module/accel/ioat/accel_ioat_rpc.o 00:34:49.063 LIB libspdk_scheduler_dynamic.a 00:34:49.063 LIB libspdk_scheduler_gscheduler.a 00:34:49.063 CC module/accel/iaa/accel_iaa_rpc.o 00:34:49.063 CC module/accel/dsa/accel_dsa_rpc.o 00:34:49.063 LIB libspdk_accel_error.a 00:34:49.063 LIB libspdk_blob_bdev.a 00:34:49.063 LIB libspdk_accel_ioat.a 00:34:49.063 LIB libspdk_accel_iaa.a 00:34:49.322 LIB libspdk_accel_dsa.a 00:34:49.322 CC module/blobfs/bdev/blobfs_bdev.o 00:34:49.322 CC module/bdev/gpt/gpt.o 00:34:49.322 CC module/bdev/malloc/bdev_malloc.o 00:34:49.322 CC module/bdev/error/vbdev_error.o 00:34:49.322 CC module/bdev/delay/vbdev_delay.o 00:34:49.322 CC module/bdev/lvol/vbdev_lvol.o 00:34:49.322 CC module/bdev/null/bdev_null.o 00:34:49.322 CC module/bdev/nvme/bdev_nvme.o 00:34:49.322 CC module/bdev/passthru/vbdev_passthru.o 00:34:49.322 LIB libspdk_sock_posix.a 00:34:49.322 CC module/bdev/error/vbdev_error_rpc.o 00:34:49.322 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:34:49.322 CC module/bdev/gpt/vbdev_gpt.o 00:34:49.581 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:34:49.581 CC module/bdev/null/bdev_null_rpc.o 00:34:49.581 LIB libspdk_bdev_error.a 00:34:49.581 CC module/bdev/delay/vbdev_delay_rpc.o 00:34:49.581 CC module/bdev/malloc/bdev_malloc_rpc.o 00:34:49.581 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:34:49.581 LIB libspdk_blobfs_bdev.a 00:34:49.581 CC module/bdev/raid/bdev_raid.o 00:34:49.581 LIB libspdk_bdev_null.a 00:34:49.581 CC module/bdev/split/vbdev_split.o 00:34:49.581 LIB libspdk_bdev_gpt.a 00:34:49.581 LIB libspdk_bdev_delay.a 00:34:49.581 LIB libspdk_bdev_malloc.a 00:34:49.581 LIB libspdk_bdev_passthru.a 00:34:49.581 CC module/bdev/zone_block/vbdev_zone_block.o 00:34:49.581 CC module/bdev/nvme/bdev_nvme_rpc.o 00:34:49.581 CC module/bdev/aio/bdev_aio.o 00:34:49.581 LIB libspdk_bdev_lvol.a 00:34:49.581 CC module/bdev/ftl/bdev_ftl.o 00:34:49.839 CC module/bdev/iscsi/bdev_iscsi.o 00:34:49.839 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:34:49.839 CC module/bdev/virtio/bdev_virtio_scsi.o 00:34:49.839 CC module/bdev/split/vbdev_split_rpc.o 00:34:49.839 CC module/bdev/virtio/bdev_virtio_blk.o 00:34:49.839 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:34:49.839 CC module/bdev/ftl/bdev_ftl_rpc.o 00:34:49.839 LIB libspdk_bdev_split.a 00:34:49.839 CC module/bdev/virtio/bdev_virtio_rpc.o 00:34:49.839 CC module/bdev/aio/bdev_aio_rpc.o 00:34:50.097 LIB libspdk_bdev_iscsi.a 00:34:50.097 CC module/bdev/nvme/nvme_rpc.o 00:34:50.097 LIB libspdk_bdev_zone_block.a 00:34:50.097 CC module/bdev/nvme/bdev_mdns_client.o 00:34:50.098 CC module/bdev/raid/bdev_raid_rpc.o 00:34:50.098 CC module/bdev/raid/bdev_raid_sb.o 00:34:50.098 LIB libspdk_bdev_ftl.a 00:34:50.098 CC module/bdev/nvme/vbdev_opal.o 00:34:50.098 CC module/bdev/nvme/vbdev_opal_rpc.o 00:34:50.098 CC module/bdev/raid/raid0.o 00:34:50.098 LIB libspdk_bdev_aio.a 00:34:50.098 LIB libspdk_bdev_virtio.a 00:34:50.098 CC module/bdev/raid/raid1.o 00:34:50.098 CC module/bdev/raid/concat.o 00:34:50.098 CC module/bdev/raid/raid5f.o 
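Among the modules compiled above are the bdev raid personalities (raid0, raid1, concat, raid5f). These modules are normally exercised at runtime over SPDK's JSON-RPC interface rather than linked into applications directly; as a purely illustrative sketch, with RPC names that vary across SPDK revisions and are assumptions here rather than anything shown in this log:

    # Hypothetical RPC usage against a running SPDK target: create two
    # 64 MiB malloc bdevs with 512-byte blocks, then stripe them as RAID-0.
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
    scripts/rpc.py bdev_raid_create -n Raid0 -z 64 -r 0 -b "Malloc0 Malloc1"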
00:34:50.098 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:34:50.355 LIB libspdk_bdev_raid.a 00:34:50.613 LIB libspdk_bdev_nvme.a 00:34:50.872 CC module/event/subsystems/vmd/vmd_rpc.o 00:34:50.872 CC module/event/subsystems/vmd/vmd.o 00:34:50.872 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:34:50.872 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:34:50.872 CC module/event/subsystems/iobuf/iobuf.o 00:34:50.872 CC module/event/subsystems/scheduler/scheduler.o 00:34:50.872 CC module/event/subsystems/sock/sock.o 00:34:50.872 LIB libspdk_event_vhost_blk.a 00:34:50.872 LIB libspdk_event_vmd.a 00:34:50.872 LIB libspdk_event_scheduler.a 00:34:50.872 LIB libspdk_event_iobuf.a 00:34:50.872 LIB libspdk_event_sock.a 00:34:50.872 CC module/event/subsystems/accel/accel.o 00:34:51.130 LIB libspdk_event_accel.a 00:34:51.130 CC module/event/subsystems/bdev/bdev.o 00:34:51.388 LIB libspdk_event_bdev.a 00:34:51.388 CC module/event/subsystems/nbd/nbd.o 00:34:51.647 CC module/event/subsystems/scsi/scsi.o 00:34:51.647 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:34:51.647 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:34:51.647 LIB libspdk_event_nbd.a 00:34:51.647 LIB libspdk_event_scsi.a 00:34:51.647 LIB libspdk_event_nvmf.a 00:34:51.647 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:34:51.647 CC module/event/subsystems/iscsi/iscsi.o 00:34:51.905 LIB libspdk_event_vhost_scsi.a 00:34:51.905 LIB libspdk_event_iscsi.a 00:34:52.164 TEST_HEADER include/spdk/config.h 00:34:52.164 CXX test/cpp_headers/rpc.o 00:34:52.164 CXX app/trace/trace.o 00:34:52.164 CC app/trace_record/trace_record.o 00:34:52.164 CC examples/accel/perf/accel_perf.o 00:34:52.164 CC test/dma/test_dma/test_dma.o 00:34:52.164 CC test/bdev/bdevio/bdevio.o 00:34:52.164 CC test/accel/dif/dif.o 00:34:52.164 CC test/blobfs/mkfs/mkfs.o 00:34:52.164 CC examples/bdev/hello_world/hello_bdev.o 00:34:52.164 CC test/app/bdev_svc/bdev_svc.o 00:34:52.164 CXX test/cpp_headers/accel_module.o 00:34:52.164 LINK spdk_trace_record 00:34:52.422 LINK bdev_svc 00:34:52.422 LINK mkfs 00:34:52.422 LINK hello_bdev 00:34:52.422 CXX test/cpp_headers/bit_pool.o 00:34:52.422 LINK bdevio 00:34:52.423 LINK accel_perf 00:34:52.423 LINK dif 00:34:52.423 LINK test_dma 00:34:52.423 CXX test/cpp_headers/nvmf.o 00:34:52.681 LINK spdk_trace 00:34:52.681 CXX test/cpp_headers/blobfs.o 00:34:52.938 CXX test/cpp_headers/notify.o 00:34:53.196 CXX test/cpp_headers/pipe.o 00:34:53.454 CXX test/cpp_headers/accel.o 00:34:54.021 CXX test/cpp_headers/mmio.o 00:34:54.587 CXX test/cpp_headers/version.o 00:34:54.846 CXX test/cpp_headers/trace_parser.o 00:34:55.105 CXX test/cpp_headers/opal_spec.o 00:34:55.672 CXX test/cpp_headers/uuid.o 00:34:56.237 CXX test/cpp_headers/fd.o 00:34:56.820 CXX test/cpp_headers/likely.o 00:34:57.406 CXX test/cpp_headers/memory.o 00:34:57.973 CXX test/cpp_headers/vfio_user_pci.o 00:34:58.910 CXX test/cpp_headers/dma.o 00:34:59.169 CC examples/bdev/bdevperf/bdevperf.o 00:34:59.737 CXX test/cpp_headers/bit_array.o 00:35:01.114 CXX test/cpp_headers/nbd.o 00:35:01.114 CXX test/cpp_headers/bdev.o 00:35:03.020 CXX test/cpp_headers/nvme_zns.o 00:35:03.020 LINK bdevperf 00:35:04.924 CXX test/cpp_headers/bdev_module.o 00:35:06.829 CXX test/cpp_headers/env_dpdk.o 00:35:08.206 CXX test/cpp_headers/nvmf_spec.o 00:35:09.581 CXX test/cpp_headers/fd_group.o 00:35:10.516 CXX test/cpp_headers/json.o 00:35:10.516 CC app/nvmf_tgt/nvmf_main.o 00:35:11.918 CXX test/cpp_headers/zipf.o 00:35:11.918 LINK nvmf_tgt 00:35:12.855 CXX test/cpp_headers/nvmf_fc_spec.o 00:35:14.234 CXX 
test/cpp_headers/base64.o 00:35:15.612 CXX test/cpp_headers/gpt_spec.o 00:35:16.991 CXX test/cpp_headers/blobfs_bdev.o 00:35:18.369 CXX test/cpp_headers/config.o 00:35:18.628 CXX test/cpp_headers/crc32.o 00:35:20.003 CXX test/cpp_headers/barrier.o 00:35:21.378 CXX test/cpp_headers/scsi_spec.o 00:35:22.756 CXX test/cpp_headers/hexlify.o 00:35:24.135 CXX test/cpp_headers/blob.o 00:35:26.049 CXX test/cpp_headers/cpuset.o 00:35:27.436 CXX test/cpp_headers/thread.o 00:35:28.814 CXX test/cpp_headers/opal.o 00:35:30.712 CXX test/cpp_headers/blob_bdev.o 00:35:32.615 CXX test/cpp_headers/xor.o 00:35:33.993 CXX test/cpp_headers/assert.o 00:35:35.371 CXX test/cpp_headers/nvme_spec.o 00:35:36.747 CXX test/cpp_headers/endian.o 00:35:38.125 CXX test/cpp_headers/tree.o 00:35:38.125 CXX test/cpp_headers/util.o 00:35:39.063 CXX test/cpp_headers/log.o 00:35:39.322 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:35:40.697 CXX test/cpp_headers/sock.o 00:35:42.071 LINK nvme_fuzz 00:35:42.071 CXX test/cpp_headers/nvme_ocssd_spec.o 00:35:43.974 CXX test/cpp_headers/ftl.o 00:35:45.885 CXX test/cpp_headers/vhost.o 00:35:47.262 CXX test/cpp_headers/crc64.o 00:35:48.640 CXX test/cpp_headers/nvme_intel.o 00:35:50.541 CXX test/cpp_headers/idxd_spec.o 00:35:51.916 CXX test/cpp_headers/crc16.o 00:35:53.293 CXX test/cpp_headers/bdev_zone.o 00:35:54.668 CXX test/cpp_headers/stdinc.o 00:35:55.604 CXX test/cpp_headers/scsi.o 00:35:57.504 CXX test/cpp_headers/trace.o 00:35:59.405 CXX test/cpp_headers/file.o 00:36:00.780 CXX test/cpp_headers/reduce.o 00:36:02.157 CXX test/cpp_headers/event.o 00:36:04.061 CXX test/cpp_headers/init.o 00:36:05.965 CXX test/cpp_headers/nvmf_transport.o 00:36:07.869 CXX test/cpp_headers/idxd.o 00:36:09.774 CXX test/cpp_headers/vfio_user_spec.o 00:36:11.677 CXX test/cpp_headers/nvme.o 00:36:13.634 CXX test/cpp_headers/iscsi_spec.o 00:36:15.012 CXX test/cpp_headers/queue.o 00:36:15.580 CXX test/cpp_headers/nvmf_cmd.o 00:36:18.114 CXX test/cpp_headers/lvol.o 00:36:19.491 CXX test/cpp_headers/histogram_data.o 00:36:21.394 CXX test/cpp_headers/env.o 00:36:22.773 CXX test/cpp_headers/ioat_spec.o 00:36:24.677 CXX test/cpp_headers/conf.o 00:36:26.053 CXX test/cpp_headers/ublk.o 00:36:27.955 CXX test/cpp_headers/dif.o 00:36:29.855 CXX test/cpp_headers/pci_ids.o 00:36:31.866 CXX test/cpp_headers/scheduler.o 00:36:33.244 CXX test/cpp_headers/string.o 00:36:35.145 CXX test/cpp_headers/jsonrpc.o 00:36:37.045 CXX test/cpp_headers/nvme_ocssd.o 00:36:38.415 CXX test/cpp_headers/vmd.o 00:36:40.318 CXX test/cpp_headers/ioat.o 00:36:42.851 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:36:52.828 LINK iscsi_fuzz 00:37:19.367 CC examples/blob/hello_world/hello_blob.o 00:37:19.367 LINK hello_blob 00:37:51.489 CC examples/blob/cli/blobcli.o 00:37:51.489 CC test/app/histogram_perf/histogram_perf.o 00:37:51.489 CC test/app/jsoncat/jsoncat.o 00:37:51.489 LINK histogram_perf 00:37:51.489 CC test/app/stub/stub.o 00:37:51.489 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:37:51.489 LINK blobcli 00:37:51.489 LINK jsoncat 00:37:51.748 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:37:52.007 LINK stub 00:37:52.007 CC examples/ioat/perf/perf.o 00:37:53.383 LINK ioat_perf 00:37:53.383 LINK vhost_fuzz 00:38:03.358 CC examples/nvme/hello_world/hello_world.o 00:38:05.262 LINK hello_world 00:38:20.140 CC examples/nvme/reconnect/reconnect.o 00:38:20.140 CC examples/nvme/nvme_manage/nvme_manage.o 00:38:20.401 LINK reconnect 00:38:22.940 LINK nvme_manage 00:38:23.513 CC examples/nvme/arbitration/arbitration.o 00:38:26.058 LINK arbitration 
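The long run of CXX test/cpp_headers/<header>.o lines above is a compile-only check: each public SPDK header is wrapped in its own C++ translation unit, so any declaration that compiles as C but breaks C++ consumers fails the build. The real check is generated by the build system; a hand-rolled sketch of the same technique, assuming the public headers live under include/spdk as the TEST_HEADER line above suggests:

    # Compile-only C++ compatibility check, one translation unit per header.
    for h in include/spdk/*.h; do
        printf '#include <spdk/%s>\n' "$(basename "$h")" > /tmp/hdr_check.cpp
        g++ -std=c++11 -I include -fsyntax-only /tmp/hdr_check.cpp
    done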
00:38:29.412 CC examples/ioat/verify/verify.o 00:38:30.346 LINK verify 00:38:48.441 CC examples/nvme/hotplug/hotplug.o 00:38:49.827 LINK hotplug 00:39:02.042 CC app/iscsi_tgt/iscsi_tgt.o 00:39:03.419 LINK iscsi_tgt 00:39:05.323 CC test/env/mem_callbacks/mem_callbacks.o 00:39:07.857 LINK mem_callbacks 00:39:12.048 CC test/event/event_perf/event_perf.o 00:39:12.307 CC test/event/reactor/reactor.o 00:39:12.874 LINK event_perf 00:39:13.133 LINK reactor 00:39:31.220 CC test/event/reactor_perf/reactor_perf.o 00:39:31.220 LINK reactor_perf 00:39:32.157 CC test/event/app_repeat/app_repeat.o 00:39:33.095 LINK app_repeat 00:39:37.295 CC test/event/scheduler/scheduler.o 00:39:38.672 LINK scheduler 00:39:50.878 CC test/env/vtophys/vtophys.o 00:39:50.878 LINK vtophys 00:39:55.071 CC examples/nvme/cmb_copy/cmb_copy.o 00:39:56.453 LINK cmb_copy 00:40:03.015 CC examples/nvme/abort/abort.o 00:40:03.583 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:40:04.961 LINK pmr_persistence 00:40:05.220 LINK abort 00:40:05.220 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:40:06.598 LINK env_dpdk_post_init 00:40:18.804 CC test/env/memory/memory_ut.o 00:40:22.095 CC app/spdk_tgt/spdk_tgt.o 00:40:23.472 LINK spdk_tgt 00:40:23.731 LINK memory_ut 00:40:38.615 CC test/env/pci/pci_ut.o 00:40:39.182 LINK pci_ut 00:40:41.085 CC test/lvol/esnap/esnap.o 00:40:49.202 CC test/nvme/aer/aer.o 00:40:50.580 LINK aer 00:40:54.840 CC test/nvme/reset/reset.o 00:40:56.745 LINK reset 00:41:00.934 LINK esnap 00:41:03.479 CC app/spdk_lspci/spdk_lspci.o 00:41:04.047 LINK spdk_lspci 00:41:04.306 CC test/rpc_client/rpc_client_test.o 00:41:05.245 LINK rpc_client_test 00:41:05.813 CC test/nvme/sgl/sgl.o 00:41:06.751 CC examples/sock/hello_world/hello_sock.o 00:41:07.320 LINK sgl 00:41:08.256 LINK hello_sock 00:41:46.982 CC examples/vmd/lsvmd/lsvmd.o 00:41:46.982 LINK lsvmd 00:41:55.117 CC examples/nvmf/nvmf/nvmf.o 00:41:56.492 LINK nvmf 00:42:01.762 CC examples/util/zipf/zipf.o 00:42:02.021 LINK zipf 00:42:08.587 CC examples/thread/thread/thread_ex.o 00:42:09.523 LINK thread 00:42:11.426 CC app/spdk_nvme_perf/perf.o 00:42:11.426 CC test/thread/poller_perf/poller_perf.o 00:42:11.995 LINK poller_perf 00:42:13.372 LINK spdk_nvme_perf 00:42:13.939 CC test/nvme/e2edp/nvme_dp.o 00:42:14.507 CC test/thread/lock/spdk_lock.o 00:42:15.076 LINK nvme_dp 00:42:18.414 LINK spdk_lock 00:42:26.532 CC test/nvme/overhead/overhead.o 00:42:26.532 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:42:26.790 LINK histogram_ut 00:42:27.357 LINK overhead 00:42:37.334 CC test/unit/lib/accel/accel.c/accel_ut.o 00:42:37.334 CC examples/vmd/led/led.o 00:42:38.270 LINK led 00:42:48.283 LINK accel_ut 00:43:06.369 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:43:18.576 CC test/unit/lib/bdev/part.c/part_ut.o 00:43:18.576 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:43:18.576 LINK scsi_nvme_ut 00:43:25.146 CC test/nvme/err_injection/err_injection.o 00:43:25.146 LINK bdev_ut 00:43:25.410 LINK err_injection 00:43:27.313 LINK part_ut 00:43:27.881 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:43:27.881 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:43:28.819 LINK tree_ut 00:43:30.195 LINK blob_bdev_ut 00:43:34.384 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:43:35.761 CC test/unit/lib/blob/blob.c/blob_ut.o 00:43:39.043 CC examples/idxd/perf/perf.o 00:43:39.301 LINK blobfs_async_ut 00:43:40.676 LINK idxd_perf 00:43:42.052 CC examples/interrupt_tgt/interrupt_tgt.o 00:43:42.988 LINK interrupt_tgt 00:43:44.366 CC 
test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:43:49.653 LINK blobfs_sync_ut 00:43:57.764 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:43:59.681 LINK gpt_ut 00:44:01.058 LINK blob_ut 00:44:13.263 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:44:13.264 CC test/unit/lib/dma/dma.c/dma_ut.o 00:44:13.264 LINK blobfs_bdev_ut 00:44:13.264 CC app/spdk_nvme_identify/identify.o 00:44:13.264 LINK dma_ut 00:44:15.167 CC test/nvme/startup/startup.o 00:44:15.426 CC test/nvme/reserve/reserve.o 00:44:15.685 LINK spdk_nvme_identify 00:44:16.254 LINK startup 00:44:16.513 LINK reserve 00:44:16.513 CC test/unit/lib/event/app.c/app_ut.o 00:44:17.081 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:44:18.018 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:44:18.586 LINK app_ut 00:44:20.537 LINK vbdev_lvol_ut 00:44:22.443 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:44:27.716 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:44:28.653 LINK bdev_ut 00:44:30.031 LINK reactor_ut 00:44:30.289 LINK bdev_raid_ut 00:44:32.192 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:44:34.096 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:44:34.664 LINK bdev_raid_sb_ut 00:44:37.201 LINK concat_ut 00:44:42.475 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:44:43.410 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:44:43.983 LINK bdev_zone_ut 00:44:46.538 LINK vbdev_zone_block_ut 00:44:46.538 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:44:47.915 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:44:48.482 CC test/unit/lib/bdev/raid/raid5f.c/raid5f_ut.o 00:44:49.857 LINK raid1_ut 00:44:49.857 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:44:51.235 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:44:51.494 LINK ioat_ut 00:44:51.753 LINK raid5f_ut 00:44:55.041 LINK conn_ut 00:44:55.300 CC app/spdk_nvme_discover/discovery_aer.o 00:44:56.236 LINK spdk_nvme_discover 00:44:57.616 LINK bdev_nvme_ut 00:44:57.616 CC app/spdk_top/spdk_top.o 00:44:58.552 CC app/vhost/vhost.o 00:44:58.811 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:44:59.070 LINK spdk_top 00:44:59.070 LINK vhost 00:44:59.638 LINK init_grp_ut 00:44:59.898 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:44:59.898 CC test/nvme/simple_copy/simple_copy.o 00:45:00.468 CC test/unit/lib/iscsi/param.c/param_ut.o 00:45:00.727 LINK simple_copy 00:45:01.295 LINK param_ut 00:45:01.864 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:45:02.432 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:45:03.810 LINK json_util_ut 00:45:03.810 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:45:04.379 LINK iscsi_ut 00:45:05.757 LINK json_write_ut 00:45:06.016 LINK json_parse_ut 00:45:09.306 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:45:10.242 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:45:11.177 LINK portal_grp_ut 00:45:11.436 LINK jsonrpc_server_ut 00:45:15.646 CC test/unit/lib/log/log.c/log_ut.o 00:45:15.646 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:45:15.905 LINK log_ut 00:45:16.840 CC test/unit/lib/notify/notify.c/notify_ut.o 00:45:17.776 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:45:17.776 LINK notify_ut 00:45:19.683 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:45:19.942 LINK lvol_ut 00:45:20.201 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:45:21.138 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:45:21.138 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:45:21.705 LINK tgt_node_ut 00:45:21.964 LINK nvme_ut 00:45:27.239 LINK subsystem_ut 00:45:27.239 LINK ctrlr_ut 
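The CC test/unit/lib/<lib>/<file>.c/<name>_ut.o and LINK <name>_ut pairs above build SPDK's unit tests as standalone executables, one binary per library source file, alongside the functional-test apps and fuzzers (nvme_fuzz, iscsi_fuzz, vhost_fuzz) linked earlier. Each binary can be run on its own after the build; a sketch, with the path inferred from the object-file naming above rather than quoted from the log:

    # Run one unit-test binary directly from the tree (path inferred from
    # the "CC test/unit/lib/json/json_parse.c/json_parse_ut.o" line above).
    ./test/unit/lib/json/json_parse.c/json_parse_ut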
00:45:27.239 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:45:27.239 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:45:27.498 LINK tcp_ut 00:45:28.435 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:45:28.435 LINK ctrlr_bdev_ut 00:45:30.970 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:45:30.970 LINK nvmf_ut 00:45:31.544 LINK ctrlr_discovery_ut 00:45:34.831 CC test/nvme/connect_stress/connect_stress.o 00:45:35.090 LINK connect_stress 00:45:36.993 LINK nvme_ctrlr_ut 00:45:36.993 CC test/nvme/boot_partition/boot_partition.o 00:45:37.251 LINK boot_partition 00:45:37.251 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:45:37.510 CC test/unit/lib/sock/sock.c/sock_ut.o 00:45:38.076 CC test/unit/lib/sock/posix.c/posix_ut.o 00:45:38.641 LINK dev_ut 00:45:38.898 CC app/spdk_dd/spdk_dd.o 00:45:39.834 LINK spdk_dd 00:45:40.092 LINK posix_ut 00:45:40.660 LINK sock_ut 00:45:40.919 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:45:45.107 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:45:47.643 LINK rdma_ut 00:45:47.643 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:45:48.578 LINK nvme_ctrlr_cmd_ut 00:45:49.512 LINK lun_ut 00:45:50.448 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:45:51.825 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:45:52.391 LINK scsi_ut 00:45:56.579 CC app/fio/nvme/fio_plugin.o 00:45:57.146 LINK transport_ut 00:45:58.082 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:45:58.649 LINK spdk_nvme 00:46:00.552 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:46:01.489 LINK nvme_ctrlr_ocssd_cmd_ut 00:46:04.051 LINK scsi_bdev_ut 00:46:07.338 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:46:07.905 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:46:09.807 LINK scsi_pr_ut 00:46:09.807 LINK nvme_ns_ut 00:46:10.374 CC test/nvme/compliance/nvme_compliance.o 00:46:11.748 LINK nvme_compliance 00:46:15.935 CC test/unit/lib/thread/thread.c/thread_ut.o 00:46:15.935 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:46:16.870 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:46:18.770 LINK iobuf_ut 00:46:19.028 CC test/unit/lib/util/base64.c/base64_ut.o 00:46:19.968 LINK base64_ut 00:46:20.239 LINK thread_ut 00:46:22.776 LINK nvme_ns_cmd_ut 00:46:25.310 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:46:25.310 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:46:26.687 LINK cpuset_ut 00:46:26.946 LINK bit_array_ut 00:46:31.137 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:46:31.137 LINK crc16_ut 00:46:31.396 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:46:31.963 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:46:32.222 LINK crc32_ieee_ut 00:46:32.788 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:46:33.354 LINK pci_event_ut 00:46:33.354 LINK crc32c_ut 00:46:34.290 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:46:34.549 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:46:34.807 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:46:35.748 LINK subsystem_ut 00:46:36.051 LINK rpc_ut 00:46:36.051 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:46:36.618 LINK crc64_ut 00:46:36.877 CC test/unit/lib/util/dif.c/dif_ut.o 00:46:39.409 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:46:39.668 LINK dif_ut 00:46:39.926 LINK nvme_ns_ocssd_cmd_ut 00:46:40.184 LINK rpc_ut 00:46:45.448 CC test/unit/lib/util/iov.c/iov_ut.o 00:46:45.448 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:46:45.448 CC test/unit/lib/util/math.c/math_ut.o 00:46:45.707 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:46:45.707 LINK iov_ut 00:46:45.965 LINK math_ut 00:46:46.900 
LINK pipe_ut 00:46:46.900 LINK idxd_user_ut 00:46:47.159 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:46:48.094 CC test/unit/lib/util/string.c/string_ut.o 00:46:48.660 CC test/unit/lib/vhost/vhost.c/vhost_ut.o 00:46:48.660 LINK string_ut 00:46:49.594 CC test/unit/lib/rdma/common.c/common_ut.o 00:46:50.161 CC test/nvme/fused_ordering/fused_ordering.o 00:46:50.419 LINK common_ut 00:46:50.692 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:46:50.692 LINK fused_ordering 00:46:50.692 LINK nvme_pcie_ut 00:46:51.637 CC test/unit/lib/util/xor.c/xor_ut.o 00:46:52.570 LINK idxd_ut 00:46:52.570 LINK xor_ut 00:46:52.570 LINK vhost_ut 00:46:52.828 CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o 00:46:53.761 LINK ftl_l2p_ut 00:46:54.696 CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o 00:46:55.631 CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o 00:46:56.567 CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o 00:46:57.135 CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o 00:46:57.394 LINK ftl_io_ut 00:46:57.652 LINK ftl_bitmap_ut 00:46:57.652 LINK ftl_band_ut 00:46:58.220 LINK ftl_mempool_ut 00:47:00.755 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:47:01.689 CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o 00:47:01.689 CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o 00:47:03.591 LINK nvme_poll_group_ut 00:47:03.591 CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o 00:47:03.591 LINK ftl_mngt_ut 00:47:03.850 LINK ftl_sb_ut 00:47:05.785 CC app/fio/bdev/fio_plugin.o 00:47:05.785 LINK ftl_layout_upgrade_ut 00:47:06.722 CC test/nvme/doorbell_aers/doorbell_aers.o 00:47:06.981 LINK spdk_bdev 00:47:07.548 LINK doorbell_aers 00:47:08.484 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:47:08.743 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:47:10.119 LINK nvme_quirks_ut 00:47:10.684 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:47:10.684 LINK nvme_qpair_ut 00:47:12.059 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:47:12.623 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:47:13.997 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:47:13.997 LINK nvme_transport_ut 00:47:14.255 LINK nvme_tcp_ut 00:47:14.514 LINK nvme_io_msg_ut 00:47:17.046 LINK nvme_pcie_common_ut 00:47:17.319 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:47:17.591 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:47:18.159 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:47:19.535 LINK nvme_opal_ut 00:47:19.795 LINK nvme_fabric_ut 00:47:21.173 CC test/nvme/fdp/fdp.o 00:47:22.109 LINK fdp 00:47:22.367 LINK nvme_rdma_ut 00:47:22.367 CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o 00:47:22.626 CC test/nvme/cuse/cuse.o 00:47:25.161 LINK cuse 00:47:25.728 LINK nvme_cuse_ut 00:48:33.424 07:49:04 -- spdk/autopackage.sh@38 -- $ make -j10 clean 00:48:33.424 make[1]: Nothing to be done for 'clean'. 
00:48:34.015 07:49:07 -- spdk/autopackage.sh@40 -- $ timing_exit build_release 00:48:34.015 07:49:07 -- common/autotest_common.sh@716 -- $ xtrace_disable 00:48:34.015 07:49:07 -- common/autotest_common.sh@10 -- $ set +x 00:48:34.015 07:49:07 -- spdk/autopackage.sh@42 -- $ timing_finish 00:48:34.015 07:49:07 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:48:34.015 07:49:07 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:48:34.015 07:49:07 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:48:34.273 + [[ -n 2371 ]] 00:48:34.273 + sudo kill 2371 00:48:34.282 [Pipeline] } 00:48:34.301 [Pipeline] // timeout 00:48:34.306 [Pipeline] } 00:48:34.323 [Pipeline] // stage 00:48:34.329 [Pipeline] } 00:48:34.346 [Pipeline] // catchError 00:48:34.355 [Pipeline] stage 00:48:34.357 [Pipeline] { (Stop VM) 00:48:34.371 [Pipeline] sh 00:48:34.652 + vagrant halt 00:48:37.194 ==> default: Halting domain... 00:48:47.184 [Pipeline] sh 00:48:47.465 + vagrant destroy -f 00:48:50.753 ==> default: Removing domain... 00:48:51.333 [Pipeline] sh 00:48:51.610 + mv output /var/jenkins/workspace/ubuntu20-vg-autotest/output 00:48:51.619 [Pipeline] } 00:48:51.638 [Pipeline] // stage 00:48:51.644 [Pipeline] } 00:48:51.661 [Pipeline] // dir 00:48:51.667 [Pipeline] } 00:48:51.685 [Pipeline] // wrap 00:48:51.692 [Pipeline] } 00:48:51.707 [Pipeline] // catchError 00:48:51.717 [Pipeline] stage 00:48:51.719 [Pipeline] { (Epilogue) 00:48:51.734 [Pipeline] sh 00:48:52.014 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:49:06.908 [Pipeline] catchError 00:49:06.910 [Pipeline] { 00:49:06.924 [Pipeline] sh 00:49:07.205 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:49:07.205 Artifacts sizes are good 00:49:07.215 [Pipeline] } 00:49:07.232 [Pipeline] // catchError 00:49:07.242 [Pipeline] archiveArtifacts 00:49:07.249 Archiving artifacts 00:49:07.549 [Pipeline] cleanWs 00:49:07.561 [WS-CLEANUP] Deleting project workspace... 00:49:07.561 [WS-CLEANUP] Deferred wipeout is used... 00:49:07.567 [WS-CLEANUP] done 00:49:07.569 [Pipeline] } 00:49:07.586 [Pipeline] // stage 00:49:07.592 [Pipeline] } 00:49:07.608 [Pipeline] // node 00:49:07.613 [Pipeline] End of Pipeline 00:49:07.691 Finished: SUCCESS